DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Illustrated below is a summary of the mapping between the claims of the instant application and the corresponding claims of copending Application No. 18/559159. Note also that the method and system claims are obvious variations of one another.
Current Application    Copending Application
1                      1
2                      2
3                      3
4                      4
5                      5
6                      6
7                      7
8                      8
9                      7, 8
10                     7, 8
11                     7, 8
12                     9
13                     10
Claims 1, 12, and 13 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 9, and 10 of copending Application No. 18/559159.
Claims 1, 12, and 13 of the instant application include all of the limitations of claims 1, 9, and 10, respectively, of U.S. Copending Application No. 18/559159, as follows:
Claims of Instant Application 13/971885
Claims of U.S. Copending Application 18/559159
1
1
An endoscopic examination support apparatus comprising: a memory configured to store instructions; and a processor configured to execute the instructions to:
An endoscopic examination support apparatus comprising: a memory configured to store instructions; and a processor configured to execute the instructions to:
acquire endoscopic images acquired by imaging an interior of a luminal organ with an endoscope camera; generate a three-dimensional model of the luminal organ in which the endoscope camera is placed, based on endoscopic images; detect an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model; acquire subject information indicating a part of the luminal organ to which a subject imaged by the endoscope camera corresponds based on the endoscopic image; perform mapping processing of associating the unobserved area with a luminal organ model on a basis of a detection result of the unobserved area, and the subject information; and
generate a three-dimensional model of a luminal organ in which an endoscope camera is placed, based on endoscopic images acquired by imaging an interior of the luminal organ with the endoscope camera; detect an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model; and
generate a display image including information capable of identifying the unobserved area existing in an inner wall of a ventral side of the luminal organ and the unobserved area existing in the inner wall of a back side of the luminal organ.
generate a display image including, as information indicating an observation achievement degree for each of a plurality of sites of the luminal organ, information indicating a position of the detected unobserved area in the plurality of sites, information indicating a position of an observed area which does not corresponds to the unobserved area in the plurality of sites, and information capable of identifying a non-observation factor associated with a detection result of the unobserved area, based on the detection result of the unobserved area.
12
9
An endoscopic examination support method comprising: acquiring endoscopic images acquired by imaging an interior of a luminal organ with the endoscope camera; generating a three-dimensional model of the luminal organ in which the endoscope camera is placed, based on endoscopic images;
An endoscopic examination support method comprising: generating a three-dimensional model of a luminal organ in which an endoscope camera is placed, based on endoscopic images acquired by imaging an interior of the luminal organ with the endoscope camera;
detecting an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model; acquiring subject information indicating a part of the luminal organ to which a subject imaged by the endoscope camera corresponds based on the endoscopic image;
detecting an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model; and generating a display image including,
performing mapping processing of associating the unobserved area with a luminal organ model on a basis of a detection result of the unobserved area, and the subject information; and generating a display image including information capable of identifying the unobserved area existing in an inner wall of a ventral side of the luminal organ and the unobserved area existing in the inner wall of a back side of the luminal organ.
as information indicating an observation achievement degree for each of a plurality of sites of the luminal organ, information indicating a position of the detected unobserved area in the plurality of sites, information indicating a position of an observed area which does not corresponds to the unobserved area in the plurality of sites, and information capable of identifying a non-observation factor associated with a detection result of the unobserved area, based on the detection result of the unobserved area.
13
10
A non-transitory computer-readable recording medium recording a program, the program causing a computer to execute: acquiring endoscopic images acquired by imaging an interior of a luminal organ with an endoscope camera;
A non-transitory computer-readable recording medium recording a program, the program causing a computer to execute:
generating a three-dimensional model of the luminal organ in which the endoscope camera is placed, based on endoscopic images; detecting an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model;
generating a three-dimensional model of a luminal organ in which an endoscope camera is placed, based on endoscopic images obtained by imaging an interior of the luminal organ with the endoscope camera; detecting an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model; and
acquiring subject information indicating a part of the luminal organ to which a subject imaged by the endoscope camera corresponds based on the endoscopic image; performing mapping processing of associating the unobserved area with a luminal organ model on a basis of a detection result of the unobserved area, and the subject information; and generating a display image including information capable of identifying the unobserved area existing in an inner wall of a ventral side of the luminal organ and the unobserved area existing in the inner wall of a back side of the luminal organ.
generating a display image including, as information indicating an observation achievement degree for each of a plurality of sites of the luminal organ, information indicating a position of the detected unobserved area in the plurality of sites, information indicating a position of an observed area which does not corresponds to the unobserved area in the plurality of sites, and information capable of identifying a non-observation factor associated with a detection result of the unobserved area, based on the detection result of the unobserved area.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Saphier et al. (Publication: US 2021/0321872 A1) in view of Natsuko et al. (Patent: US 6,252,599 B1).
Regarding claim 1, Saphier discloses an endoscopic examination support apparatus comprising: a memory configured to store instructions; and a processor configured to execute the instructions to ([0209], [0274], [0665] - Fig. 1, computing device 105 may be coupled to one or more intraoral scanners; the light may pass through an endoscopic probing member, which may include a rigid, light-transmitting medium, which may be a hollow object defining within it a light transmission path or an object made of a light-transmitting material, e.g., a glass body or tube. The computing device includes a memory containing instructions to be executed by the processor to perform:):
acquire endoscopic images acquired by imaging an interior of a luminal organ with an endoscope camera ([0223], [0665] – intraoral scanners, endoscopic probe, may work by moving the scanner 150 inside a patient's organ to capture images [0233]. The mouth is an interior luminal organ.);
generate a three-dimensional model of the luminal organ in which the endoscope camera is placed, based on endoscopic images ([0146], [0665] – images are formed by stitching the intraoral scans together in 3D; endoscopic probe. The mouth is an interior luminal organ);
detect an area estimated not to be observed by the endoscope camera, as an unobserved area, based on the three-dimensional model ([0118] - processing at least one of the intraoral scan or data from a three-dimensional surface generated from the intraoral scan and one or more additional intraoral scans using a trained machine learning model that has been trained to identify regions in intraoral scans obscured by a dirty probe or a dirty sleeve over the probe.
[0146], [0665] – images are formed by stitching the intraoral scans together in 3D; endoscopic probe. The mouth is an interior luminal organ);
acquire subject information indicating a part of the luminal organ to which a subject imaged by the endoscope camera corresponds based on the endoscopic image ([0311] In an example, intraoral scan application 115 may include logic for automatically identifying (e.g., highlighting) a margin line in an image and/or 3D model of a preparation tooth. This may make it easier for the doctor to inspect the margin line for accuracy. Intraoral scan application 115 may additionally mark and/or highlight specific segments of the margin line that are unclear, uncertain, and/or indeterminate. Additionally, or alternatively, intraoral scan application 115 may mark and/or highlight specific areas (e.g., a surface) that is unclear, uncertain and/or indeterminate.
The mouth is an interior luminal organ.);
perform mapping processing of associating the unobserved area with a luminal organ model on a basis of a detection result of the unobserved area, and the subject information ([0311] - intraoral scan application 115 may include logic for automatically identifying (e.g., highlighting) a margin line in an image and/or 3D model of a preparation tooth. This may make it easier for the doctor to inspect the margin line for accuracy. Intraoral scan application 115 may additionally mark and/or highlight specific segments of the margin line that are unclear, uncertain, and/or indeterminate. Additionally, or alternatively, intraoral scan application 115 may mark and/or highlight specific areas (e.g., a surface) that is unclear, uncertain and/or indeterminate, “mapping”.); and
generate a display image including information capable of identifying the unobserved area ([0311] - intraoral scan application 115 may include logic for automatically identifying (e.g., highlighting) a margin line in an image and/or 3D model of a preparation tooth. This may make it easier for the doctor to inspect the margin line for accuracy. Intraoral scan application 115 may additionally mark and/or highlight specific segments of the margin line that are unclear, uncertain, and/or indeterminate. Additionally, or alternatively, intraoral scan application 115 may mark and/or highlight specific areas (e.g., a surface) that is unclear, uncertain and/or indeterminate.
[0073], [0074] – display the highlighted model.).
Saphier does not disclose the following limitation; however, Natsuko discloses
area existing in an inner wall of a ventral side of the luminal organ and the area existing in the inner wall of a back side of the luminal organ (column 1 line 60 to 65 - producing a three-dimensional (3D) image representing a wall surface of 3D tissue as seen from a viewpoint lying in the internal space formed by the 3D tissue; producing a tomographic image at a cross-sectional location defined as lying near the viewpoint or a two-dimensional (2D) image obtained by processing the tomographic image; and displaying the tomographic image or the 2D image juxtaposed or superimposed with the 3D image.
The cross-sectional image shows the inner wall of a ventral side and the inner wall of a back side of the organ.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier with the area existing in an inner wall of a ventral side of the luminal organ and the area existing in the inner wall of a back side of the luminal organ, as taught by Natsuko. The motivation for doing so is to view the internal condition of the organ.
Regarding claim 12, see the rejection of claim 1 above.
Regarding claim 13, see the rejection of claim 1 above.
Claims 2, 3, 4, 5, 6, 7, 9, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Saphier et al. (Publication: US 2021/0321872 A1) in view of Natsuko et al. (Patent: US 6,252,599 B1) and Yaroslavsky et al. (Publication: US 2004/0249274 A1).
Regarding claim 2, Saphier in view of Natsuko disclose all the limitations of claim 1.
Saphier in view of Natsuko do not disclose the following limitation; however,
Yaroslavsky discloses including information indicating a position of the unobserved area at each of the plurality of sites ([0049] Example images of the nodular and micronodular BCC acquired at the wavelength of 620 nm are shown in FIGS. 7a and 7b. Conventional and superficial images of unstained tissue are presented in FIGS. 7a and 7b, respectively. Tumor margins are hardly visible in the conventional image, while in the superficial image the tumor boundaries could be delineated even without staining. Conventional and superficial (I.sub.66.sup.620) images of the same specimen stained with TB are shown in FIGS. 7c and 7d. In both images the tumor is very dark and is easily identified. Comparison of the images in FIGS. 7b, 7c, and 7d with frozen H&E presented in FIG. 7e confirms that in general the location and the shape of the tumor were identified correctly in all the images.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko to include information indicating a position of the unobserved area at each of the plurality of sites, as taught by Yaroslavsky. The motivation for doing so is to enhance image quality, as taught by Yaroslavsky.
Regarding claim 3, Saphier in view of Natsuko disclose all the limitations of claim 2.
Yaroslavsky discloses wherein the information indicating the position of the unobserved area at each of a plurality of sites is added to the luminal organ model or a luminal organ image created in advance based on a structure of the general luminal organs ([0047] Example images of infiltrative morpheaform BCC obtained at the wavelengths of 410 nm and 610 nm before and after staining are presented in FIGS. 5a-5c and 6a-6d. The images acquired at the wavelength of 410 nm before and after contrast agent application are presented in FIGS. 5a and 5b, respectively, “in advance”.
[0049] Example images of the nodular and micronodular BCC acquired at the wavelength of 620 nm are shown in FIGS. 7a and 7b. Conventional and superficial images of unstained tissue are presented in FIGS. 7a and 7b, respectively. Tumor margins are hardly visible in the conventional image, while in the superficial image the tumor boundaries could be delineated even without staining, “unobserved”. Conventional and superficial (I.sub.66.sup.620) images of the same specimen stained with TB are shown in FIGS. 7c and 7d. In both images the tumor is very dark and is easily identified. Comparison of the images in FIGS. 7b, 7c, and 7d with frozen H&E presented in FIG. 7e confirms that in general the location and the shape of the tumor were identified correctly in all the images. ).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko and Yaroslavsky such that the information indicating the position of the unobserved area at each of a plurality of sites is added to the luminal organ model or a luminal organ image created in advance based on a structure of the general luminal organs, as taught by Yaroslavsky. The motivation for doing so is to enhance image quality, as taught by Yaroslavsky.
Regarding claim 4, Saphier in view of Natsuko disclose all the limitations of claim 3.
Yaroslavsky discloses to reproduce the image of the observed area in a vicinity of the designated one unobserved area when one unobserved area in the luminal organ model or the luminal organ image is designated ([0049] Example images of the nodular and micronodular BCC acquired at the wavelength of 620 nm are shown in FIGS. 7a and 7b. Conventional and superficial images of unstained tissue are presented in FIGS. 7a and 7b, respectively. Tumor margins are hardly visible in the conventional image, while in the superficial image the tumor boundaries could be delineated even without staining, “unobserved”. Conventional and superficial (I.sub.66.sup.620) images of the same specimen stained with TB are shown in FIGS. 7c and 7d. In both images the tumor is very dark and is easily identified. Comparison of the images in FIGS. 7b, 7c, and 7d with frozen H&E presented in FIG. 7e confirms that in general the location and the shape of the tumor were identified correctly in all the images.
[0028] - forming an image from an intensity difference between the detected light having said first polarization and the detected light from said second polarization “reproduce”.
[0042] - The first image is representative of the remitted light in a direction parallel to the incident light. The second image is representative of the remitted light in a direction perpendicular to the incident light. The first and second images are transmitted to computer 1 or the like to produce a difference image.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko and Yaroslavsky to reproduce the image of the observed area in a vicinity of the designated one unobserved area when one unobserved area in the luminal organ model or the luminal organ image is designated, as taught by Yaroslavsky. The motivation for doing so is to enhance image quality, as taught by Yaroslavsky.
Regarding claim 5, Saphier in view of Natsuko disclose all the limitations of claim 1.
Yaroslavsky discloses to generate the [[display]] image including information for identifying the unobserved area at each of the plurality of sites and an observed area at each of the plurality of sites, the observed area being an area which is not corresponding to the unobserved area ([0042] - The first image is representative of the remitted light in a direction parallel to the incident light. The second image is representative of the remitted light in a direction perpendicular to the incident light. The first and second images are transmitted to computer 1 or the like to produce a difference image.
[0049] Example images of the nodular and micronodular BCC acquired at the wavelength of 620 nm are shown in FIGS. 7a and 7b. Conventional and superficial images of unstained tissue are presented in FIGS. 7a and 7b, respectively. Tumor margins are hardly visible in the conventional image, while in the superficial image the tumor boundaries could be delineated even without staining. Conventional and superficial (I.sub.66.sup.620) images of the same specimen stained with TB are shown in FIGS. 7c and 7d, “observed area and unobserved area”.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko and Yaroslavsky to generate the [[display]] image including information for identifying the unobserved area at each of the plurality of sites and an observed area at each of the plurality of sites, the observed area being an area which is not corresponding to the unobserved area, as taught by Yaroslavsky. The motivation for doing so is to enhance image quality, as taught by Yaroslavsky.
Regarding claim 6, Saphier in view of Natsuko disclose all the limitations of claim 1.
Saphier discloses to generate the image including information for identifying a non-observation factor associated with a detection result of the unobserved area (
[0296] – missing areas in the intraoral scan data for a generated image area. This can include missing scan data of a palate, unscanned teeth, incomplete scanning of teeth, and holes or voids in scanning information (e.g., voids above a threshold size). If insufficient 2D color images of an area have been generated, then the color quality for that area may be low. Accordingly, intraoral scan application 115 may flag an area for further scanning to receive additional color information for that area. In another example, surface quality (e.g., number of known points on a surface) may depend on a number of scans that have been received for that surface. With a small number of scans for a surface at a particular area, the area may be produced but with low certainty or low quality. Intraoral scan application 115 may flag such areas that have too few data points for further scanning, “factor”.)
Yaroslavsky discloses to generate the image including information ([0025] - the wavelength is varied to form a plurality of images at different depths.
[0049] - The combination of tissue staining and polarized light superficial imaging provides both strong contrast of the tumor in the image and depth resolution of approximately 150 .mu.m. Tumor margins are hardly visible in the conventional image, “non-observation”.
[0043] -Thus, the depth in the skin where the first back scattering event occurs is an adequate approximation for the thickness of the tissue layer, which contributes dominantly to the measured signal (imaging depth). This imaging depth, D, may be estimated if the scattering coefficient and the anisotropy factor of the tissue is known: D=1/(.mu..sub.s(1-g)). Depth involves non-observation and observation.
That is, finding the tumor is based on the imaging depth, D, which involves the anisotropy factor).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko to generate the image including information, as taught by Yaroslavsky. The motivation for doing so is to enhance image quality, as taught by Yaroslavsky.
Regarding claim 7, Saphier in view of Natsuko disclose all the limitations of claim 1.
Saphier discloses detecting, by a trained machine learning model, based on the endoscopic image ([0037] - trained machine learning model that has been trained to identify restorative dental objects. [0146], [0665] – images are formed by stitching the intraoral scans together in 3D; endoscopic probe. The mouth is an interior luminal organ).
Yaroslavsky discloses detect a lesion candidate area, which is an area estimated to be a lesion candidate ([0048] - dark area in the image clearly delineates lesion boundaries, which correlate well with the margins outlined by the surgeon in the image of histological slide of the same tumor (FIG. 6d)); and
generate the image including at least one of information indicating a total number of the lesion candidate areas, information indicating a position of the lesion candidate area, and information indicating a state of the lesion candidate area ([0048] - dark area in the image clearly delineates lesion boundaries, which correlate well with the margins outlined by the surgeon in the image of histological slide of the same tumor (FIG. 6d).
[0047] - the tumor is very dark and is easily identified. Comparison of the images in FIGS. 7b, 7c, and 7d with frozen H&E presented in FIG. 7e confirms that in general the location and the shape of the tumor were identified correctly in all the images. ).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko to detect a lesion candidate area, which is an area estimated to be a lesion candidate, as taught by Yaroslavsky. The motivation for doing so is to enhance image quality, as taught by Yaroslavsky.
Regarding claim 9, Saphier in view of Natsuko disclose all the limitations of claim 1.
Saphier discloses a missing area in the three-dimensional model ([0571] – missing area in the 3D model).
Yaroslavsky discloses to detect, as the unobserved area, at least one of an area in the organ where observation by the endoscope camera is estimated to be difficult ( [0049] Example images of the nodular and micronodular BCC acquired at the wavelength of 620 nm are shown in FIGS. 7a and 7b. Conventional and superficial images of unstained tissue are presented in FIGS. 7a and 7b, respectively. Tumor margins are hardly visible in the conventional image, while in the superficial image the tumor boundaries could be delineated even without staining. Conventional and superficial (I.sub.66.sup.620) images of the same specimen stained with TB are shown in FIGS. 7c and 7d. In both images the tumor is very dark and is easily identified. Comparison of the images in FIGS. 7b, 7c, and 7d with frozen H&E presented in FIG. 7e confirms that in general the location and the shape of the tumor were identified correctly in all the images.
[0052], [0040] - FIG. 1. Such apparatus may be included in a single unit or such apparatus may be included in an endoscope, camera.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko to detect, as the unobserved area, at least one of an area in the organ where observation by the endoscope camera is estimated to be difficult, as taught by Yaroslavsky. The motivation for doing so is to enhance image quality, as taught by Yaroslavsky.
Regarding claim 10, Saphier in view of Natsuko disclose all the limitations of claim 9.
Saphier discloses an area in the three-dimensional model ([0571] – missing area in the 3D model);
where brightness is less than a predetermined value ([0662] During measurement, light (e.g., an array of light rays or beams) may be projected out of the anterior segment 3271. Accordingly, the light beams 3992 are reflected off of the dirty regions 3280, 3240, which provides a depth (z-axis) measurement of the points on the dirty regions 3280, 3240. Accordingly, points that have distances/depths that are less than a threshold may be identified as dirty points in embodiments.);
predetermined value ([0662] Accordingly, the light beams 3992 are reflected off of the dirty regions 3280, 3240, which provides a depth (z-axis) measurement of the points on the dirty regions 3280, 3240. Accordingly, points that have distances/depths that are less than a threshold may be identified as dirty points in embodiments.).
Yaroslavsky discloses an observation difficult area corresponds to at least one of the area in the endoscopic image where brightness is less than a value, where blurred amount is smaller than a value, and where a residue is present (
[0049] images of the nodular and micronodular BCC acquired at the wavelength of 620 nm are shown in FIGS. 7a and 7b. Conventional and superficial images of unstained tissue are presented in FIGS. 7a and 7b, respectively. Tumor margins are hardly visible in the conventional image, while in the superficial image the tumor boundaries could be delineated even without staining.
[0052], [0040] - FIG. 1. Such apparatus may be included in a single unit or such apparatus may be included in an endoscope, camera.), and
wherein the processor is further configured to execute the instructions to detect an area corresponding to the observation difficult area, as the unobserved area ([0049] Example images of the nodular and micronodular BCC acquired at the wavelength of 620 nm are shown in FIGS. 7a and 7b. Conventional and superficial images of unstained tissue are presented in FIGS. 7a and 7b, respectively. Tumor margins are hardly visible in the conventional image, while in the superficial image the tumor boundaries could be delineated even without staining. Conventional and superficial (I.sub.66.sup.620) images of the same specimen stained with TB are shown in FIGS. 7c and 7d. In both images the tumor is very dark and is easily identified. Comparison of the images in FIGS. 7b, 7c, and 7d with frozen H&E presented in FIG. 7e confirms that in general the location and the shape of the tumor were identified correctly in all the images.
[0052], [0040] - FIG. 1. Such apparatus may be included in a single unit or such apparatus may be included in an endoscope, camera.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko and Yaroslavsky with an observation difficult area that corresponds to at least one of an area in the endoscopic image where brightness is less than a value, an area where the blurred amount is smaller than a value, and an area where a residue is present, and wherein the processor is further configured to execute the instructions to detect an area corresponding to the observation difficult area, as the unobserved area, as taught by Yaroslavsky. The motivation for doing so is to enhance the image quality as taught by Yaroslavsky.
Claims 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Saphier et al. (Publication: US 2021/0321872 A1) in view of Natsuko et al. (Patent: US 6,252,599 B1), Yaroslavsky et al. (Publication: US 2004/0249274 A1), and Peleg et al. (Publication: US 2013/0070986 A1).
Regarding claim 8, Saphier in view of Natsuko discloses all the limitations of claim 7.
Saphier discloses displaying an enlarged image when the lesion candidate image is [[clicked]] ([0424] - zooming in or out on certain areas of the 3D model(s)).
Yaroslavsky discloses wherein information indicating a state of the lesion candidate area is a lesion candidate image ([0048] - dark area in the image clearly delineates lesion boundaries, which correlate well with the margins outlined by the surgeon in the image of histological slide of the same tumor (FIG. 6d).
[0047] - the tumor is very dark and is easily identified. Comparison of the images in FIGS. 7b, 7c, and 7d with frozen H&E presented in FIG. 7e confirms that in general the location and the shape of the tumor were identified correctly in all the images.), and
wherein the processor is further configured to execute the instructions to [[display an enlarged image]] of the lesion candidate area, or an image captured in the periphery of the lesion candidate area ([0048] - dark area in the image clearly delineates lesion boundaries, which correlate well with the margins outlined by the surgeon in the image of histological slide of the same tumor (FIG. 6d).
[0047] - the tumor is very dark and is easily identified. Comparison of the images in FIGS. 7b, 7c, and 7d with frozen H&E presented in FIG. 7e confirms that in general the location and the shape of the tumor were identified correctly in all the images.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko and Yaroslavsky such that information indicating a state of the lesion candidate area is a lesion candidate image, and such that the processor is further configured to execute the instructions to [[display an enlarged image]] of the lesion candidate area, or an image captured in the periphery of the lesion candidate area, as taught by Yaroslavsky. The motivation for doing so is to enhance the image quality as taught by Yaroslavsky.
Saphier in view of Natsuko and Yaroslavsky does not disclose, but Peleg discloses, that the image is clicked ([0048] – click on the GUI of a touchscreen.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko and Yaroslavsky such that the image is clicked, as taught by Peleg. The motivation for doing so is to enhance convenience as taught by Peleg.
Regarding claim 11, Saphier in view of Natsuko discloses all the limitations of claim 9.
Yaroslavsky discloses wherein the model corresponds to at least one of an area hidden by a shield in the lumen organ and an area in which imaging by the endoscope camera ([0049] Tumor margins are hardly visible in the conventional image, "hidden area", while in the superficial image the tumor boundaries could be delineated even without staining. The image in FIG. 7b is formed by the light that is remitted from all the depth of the tumor sample (.gtoreq.1 mm thick). Therefore the conventional image of stained tissue (FIG. 7b) provides a superior contrast, but does not allow any depth resolution, which makes the detailed comparison with 5 .mu.m thick H&E section (FIG. 7e) impossible. In contrast, the superficial imaging of the unstained tumor probes the layer of 150 .mu.m, "area hidden by a shield", but obviously does not provide sufficient contrast to distinguish fine details of the imaged specimen. The combination of tissue staining and polarized light superficial imaging provides both strong contrast of the tumor in the image and depth resolution of approximately 150 .mu.m. Detailed comparison of the images in FIG. 7d with frozen H&E presented in FIG. 7e confirms that the number and location of the tumor lobules were identified correctly in the image I.sub..DELTA..sup.620 and proves that PLI enables imaging of the superficial tissue layer alone.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko and Yaroslavsky such that the model corresponds to at least one of an area hidden by a shield in the lumen organ and an area in which imaging by the endoscope camera, as taught by Yaroslavsky. The motivation for doing so is to enhance the image quality as taught by Yaroslavsky.
Saphier in view of Natsuko and Yaroslavsky does not disclose, but Peleg discloses,
imaging by the device is not performed continuously for a predetermined time or more ([0053] - pauses the image stream at a certain image, processor 15 may generate a user-based indication of a suspected unhealthy situation/condition regarding that image for example in case the pause lasts more than a predetermined period of time, such as a second or a period less than a second, or a few seconds, or any suitable period of time according to specific settings.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Saphier in view of Natsuko and Yaroslavsky such that imaging by the device is not performed continuously for a predetermined time or more, as taught by Peleg. The motivation for doing so is to enhance convenience as taught by Peleg.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ming Wu whose telephone number is (571)270-0724. The examiner can normally be reached on Monday - Friday: 9:30am - 6:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached on 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MING WU/
Primary Examiner, Art Unit 2618