DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 12-13 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8, and 9 of copending Application No. 18/840,361 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the subject matter of the current claims is contained within the independent claims of Application No. 18/840,361, as shown in the chart below.
Current application claims | Copending Application No. 18/840,361 claims
1. (Currently amended) An image processing device comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: acquire an endoscopic image obtained by photographing an examination target; detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; and determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth.
As can be seen, the entire subject matter of the current claim is contained within claim 1 of the copending application; claim 1 of the copending application does recite additional subject matter. The highlighted portions recite the same subject matter.
1. An image processing device comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: acquire an endoscopic image obtained by photographing an examination target; detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth; output, to a display device or audio output device, a suggestion regarding photography of the endoscopic image, upon determining that the endoscopic image is not the image adequate to determine the degree of progression or the invasion depth; and output, to the display device or the audio, information that the degree of progression cannot be determined, upon determining that inadequacy determination results are consecutively generated for a predetermined number of times or for a predetermined period without generation of an adequacy determination result even after the output of the suggestion.
12. (Original) An image processing method executed by a computer, the image processing method comprising: acquiring an endoscopic image obtained by photographing an examination target; detecting, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determining whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; and determining the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth.
Applying the same logic as for claim 1, the subject matter of claim 12 of the current application is contained within claim 8 of the copending application.
8. An image processing method executed by a computer, the image processing method comprising: acquiring an endoscopic image obtained by photographing an examination target; detecting, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determining whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; determining the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth; and outputting, to a display device or audio output device, a suggestion regarding photography of the endoscopic image, upon determining that the endoscopic image is not the image adequate to determine the degree of progression or the invasion depth; and outputting, to the display device or the audio, information that the degree of progression cannot be determined, upon determining that inadequacy determination results are consecutively generated for a predetermined number of times or for a predetermined period without generation of an adequacy determination result even after the output of the suggestion.
13. (Currently amended) A non-transitory computer readable storage medium storing a program executed by a computer, the program causing the computer to: acquire an endoscopic image obtained by photographing an examination target; detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; and determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth.
Applying the same logic as for claim 1, the subject matter of claim 13 of the current application is contained within claim 9 of the copending application.
9. A non-transitory computer readable storage medium storing a program executed by a computer, the program causing the computer to: acquire an endoscopic image obtained by photographing an examination target; detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; and determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth; and output, to a display device or audio output device, a suggestion regarding photography of the endoscopic image, upon determining that the endoscopic image is not the image adequate to determine the degree of progression or the invasion depth; and output, to the display device or the audio, information that the degree of progression cannot be determined, upon determining that inadequacy determination results are consecutively generated for a predetermined number of times or for a predetermined period without generation of an adequacy determination result even after the output of the suggestion.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claims 1 and 12-13 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 8, and 9 of copending Application No. 18/410,293 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the current claim limitations are contained within the claims of copending Application No. 18/410,293.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Current application claims | Copending Application No. 18/410,293 claims
1. (Currently amended) An image processing device comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: acquire an endoscopic image obtained by photographing an examination target; detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; and determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth.
As can be seen, the entire subject matter of the current claim is contained within claim 1 of the copending application; claim 1 of the copending application does recite additional subject matter. The highlighted portions recite the same subject matter.
1. An image processing device comprising: at least one memory configured to store instructions; and at least one processor configured to execute the instructions to: acquire an endoscopic image obtained by photographing an examination target; detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth; and in case where it is determined that the endoscopic image is not adequate to determine the degree of progression of the lesion, display a frame indicating a preferable display range of the lesion region and superimposed on the endoscopic image.
12. (Original) An image processing method executed by a computer, the image processing method comprising: acquiring an endoscopic image obtained by photographing an examination target; detecting, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determining whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; and determining the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth.
Applying the same logic as for claim 1, the subject matter of claim 12 of the current application is contained within claim 8 of the copending application.
8. An image processing method executed by a computer, the image processing method comprising: acquiring an endoscopic image obtained by photographing an examination target; detecting, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determining whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; determining the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth; and in case where it is determined that the endoscopic image is not adequate to determine the degree of progression of the lesion, displaying a frame indicating a preferable display range of the lesion region and superimposed on the endoscopic image.
13. (Currently amended) A non-transitory computer readable storage medium storing a program executed by a computer, the program causing the computer to: acquire an endoscopic image obtained by photographing an examination target; detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; and determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth.
Applying the same logic as for claim 1, the subject matter of claim 13 of the current application is contained within claim 9 of the copending application.
9. A non-transitory computer readable storage medium storing a program executed by a computer, the program causing the computer to: acquire an endoscopic image obtained by photographing an examination target; detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image; determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth, based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion; determine the degree of progression or the invasion depth, based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth; and in case where it is determined that the endoscopic image is not adequate to determine the degree of progression of the lesion, display a frame indicating a preferable display range of the lesion region and superimposed on the endoscopic image.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-13 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Saito (US 2022/0020496).
As per claims 1 and 12-13, Saito teaches an image processing device, an image processing method, and a non-transitory computer readable storage medium, comprising: at least one memory configured to store instructions (Saito, ¶[0186] “a storage unit (a hard disk or a semiconductor memory)”; this represents at least one memory); and at least one processor configured to execute the instructions to: acquire an endoscopic image obtained by photographing an examination target (Saito, ¶[0173] “In the CNN system according to the first embodiment, INTEL's Core i7-7700K was used as the CPU, and NVIDEA's GeForce GTX 1070 was used as the graphics processing unit (GPU).” This represents the processing unit. Also, ¶[0055] “and the probability corresponding to the site where the image is captured, for test data including the plurality of the endoscopic images of the digestive organ of a large number of subjects, not only an endoscopy specialist is enabled to perform check and make corrections easily, but also it becomes possible to simplify the tasks of creating a collection of images that are associated with a disease.” This would be acquiring or photographing multiple images of the examination target, such as in Fig. 5. For an endoscopic image, see ¶[0045] “a diagnostic assistance system, a diagnostic assistance program, and a computer-readable recording medium storing therein the diagnostic assistance program for a disease based on an endoscopic image of a digestive organ, being capable of correctly identifying, for example, not only the cases of H. pylori positives and negatives but also cases after H. pylori eradication, using an endoscopic image of the digestive organ with use of a CNN system.” This represents a single image); detect, based on the endoscopic image, a lesion region which is a candidate region of a lesion of the examination target in the endoscopic image (Saito, ¶[0086] “With the diagnostic assistance method for a disease based on an endoscopic image of a digestive organ with use of a CNN system according to the seventeenth aspect of the present invention, it becomes possible to detect the presence of a pharyngeal cancer at a high sensitivity and a high accuracy during a normal esophagogastroduodenoscopy endoscopic examination.” This represents the lesion region which is a candidate region; “detect the presence of a pharyngeal cancer” is the lesion. Also, ¶[0053-54] “the trained CNN system outputs at least one of a probability of the positivity and/or the negativity for the disease in the digestive organ, a probability of the past disease, the severity level of the disease”; the severity level is also “a candidate,” as it could be a severity of 0, for example, and the negative result is a candidate right before a negative result is determined); determine whether or not the endoscopic image is an image adequate to determine a degree of progression or an invasion depth (Saito, ¶[0054] “the trained CNN system outputs at least one of a probability of the positivity and/or the negativity for the disease in the digestive organ, a probability of the past disease, the severity level of the disease, the invasion depth of the disease, and a probability corresponding to the site where the image is captured, based on a second endoscopic image of the digestive organ.” The invasion depth being determined is represented by “invasion depth of the disease”), based on at least one of a size of the lesion region and/or a degree of reliability regarding a probability of the lesion region as the lesion (Saito, ¶[0054] “the trained CNN system outputs at least one of a probability of the positivity and/or the negativity for the disease in the digestive organ”; the probability of the positivity and/or the negativity represents the degree of reliability); and determine the degree of progression or the invasion depth (Saito, ¶[0053] “at least one final diagnosis result of the positivity or the negativity for the disease in the digestive organ, a severity level, or an invasion depth of the disease, the final diagnosis result being corresponding to the first endoscopic image, in which” This represents determining the degree of progression or the invasion depth), based on the endoscopic image determined to be the image adequate to determine the degree of progression or the invasion depth (Saito, ¶[0054] “the invasion depth of the disease, and a probability corresponding to the site where the image is captured, based on a second endoscopic image of the digestive organ.” The probability corresponding to where the image is captured represents “adequate to determine the degree of progression,” because if the image is of something completely different, then it is not adequate).
As per claim 2, Saito teaches the image processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to change a criterion to be used for determining whether or not the endoscopic image is an image adequate to determine the degree of progression or the invasion depth, based on information obtained by determining the degree of progression or the invasion depth (Saito, ¶[0187] “and because an enormous number of data processing is performed when the CNN program is run, it is preferable to run the processes in parallel, and to have a large-capacity storage unit.” This represents having a processor, and ¶[0290] “The endoscope systems used included high-resolution or high-definition upper gastrointestinal endoscopes (GIF-XP290N, GIF-Q260J, GIF-RQ260Z, GIF-FQ260Z, GIF-Q240Z, GIF-H290Z, GIF-H290, GIF-HQ290, and GIF-H260Z; manufactured by Olympus Corporation, Tokyo, Japan) and video processors (CV260; manufactured by Olympus Corporation), high-definition magnification gastrointestinal endoscopes (GIF-H290Z, GIF-H290, GIF-HQ290, GIF-H260Z: manufactured by Olympus Corporation) and video processors (EVIS LUCERA CV-260/CLV-260, and EVIS LUCERA ELITE CV-290/CLV-290SL; manufactured by Olympus Medical Systems Corp.), and high-resolution endoscopes (EG-L590ZW, EG-L600ZW, and EG-L600ZW7; manufactured by FUJIFILM Corporation, Tokyo, Japan) and a video endoscope system (LASEREO: manufactured by FUJIFILM Corporation).” This also represents having the processor).
As per claim 3, Saito teaches the image processing device according to claim 2, wherein the at least one processor is configured to execute the instructions to change the criterion, based on a degree of confidence for a class of the degree of progression or the invasion depth (Saito, ¶[0238] “The AUC of the trained CNN system according to the third embodiment which detected an erosion/ulcer was 0.960 (with a 95% confidence interval [CI], 0.950 to 0.969; see FIG. 11).” This represents the degree of confidence).
As per claim 4, Saito teaches the image processing device according to claim 3, wherein the at least one processor is configured to execute the instructions to change the criterion, based on a degree of change in the class, determined in time series, of the degree of progression or the invasion depth (Saito, ¶[0030] “so that a WCE image analysis with such a number of images requires an intense attention and concentration for a time period of 30 to 120 minutes on the average.” This represents being determined in time series).
As per claim 5, Saito teaches the image processing device according to claim 3, wherein the criterion is at least one of a first criterion regarding the size of the lesion region and/or a second criterion regarding the degree of reliability (Saito, ¶[0175] “A value with the maximum value among these three probability scores was selected as the seemingly most reliable “diagnosis made by the CNN”.” This represents the degree of reliability).
As per claim 6, Saito teaches the image processing device according to claim 1, wherein the at least one processor is configured to further execute the instructions to output, by a display device or audio output device, a suggestion regarding photography of the endoscopic image, upon determining that the endoscopic image is not the image adequate to determine the degree of progression or the invasion depth (Saito, ¶[0040] “Therefore, in order for a practitioner to perform the CS and to detect abnormality, it is necessary to correctly recognize the anatomical parts of the colon via a CS image.” The incorrect image represents the image that is not adequate to determine the degree of progression or the invasion depth, and ¶[0118] “and the trained CNN program displays in the second image an invasion depth of a squamous cell carcinoma as the disease.” Therefore, this information is displayed).
As per claim 7, Saito teaches the image processing device according to claim 6, wherein the at least one processor is configured to execute the instructions to output information indicating a target range of the lesion region on the endoscope image (Saito, ¶[0218] “In other words, it becomes difficult to capture the shape of a lumen when the endoscope is moved closer to the surface of a site, or when the lumen is not sufficiently filled with the air.” The shape represents the target range, and ¶[0175] “The trained/validated CNN system according to the first embodiment outputs a probability score (PS) within a range between 0 and 1, as diagnosis results” represents a target range).
As per claim 8, Saito teaches the image processing device according to claim 7, wherein the at least one processor is configured to execute the instructions to determine at least one of a shape of the target range and/or a size of the target range, based on a detection result of the lesion region (Saito, ¶[0218] “In other words, it becomes difficult to capture the shape of a lumen when the endoscope is moved closer to the surface of a site, or when the lumen is not sufficiently filled with the air.” This represents determining at least one of a shape of the target range).
As per claim 9, Saito teaches the image processing device according to claim 6, wherein the at least one processor is configured to execute the instructions to output information prompting a photographing position to approach the lesion region as the suggestion (Saito, ¶[0220] “the CNN system can recognize the position for colonoscopic images more correctly.” This represents information prompting a photographing position to approach the lesion region as the suggestion, so as to capture images more correctly).
As per claim 10, Saito teaches the image processing device according to claim 1, wherein the at least one processor is configured to execute the instructions to acquire an inference result regarding the lesion region outputted from a lesion detection model by inputting the endoscopic image to the lesion detection model (Saito, ¶[0173] “A trained model trained with the features of natural images in ImageNet was used as initial values at the time of the start of the training.” This represents implementing the detection model), and wherein the lesion detection model is a model obtained by machine learning of a relation between an input image to the lesion detection model and the lesion region included in the input image (Saito, ¶[0034] “Deep learning enables a neural network with a plurality of layers stacked to learn high-order features of input data. Deep learning also enables a neural network to update internal parameters that are used in calculating a representation at each layer from the representation at the previous layer, using a back-propagation algorithm, by instructing how the apparatus should make changes.” This represents the machine learning model, which includes the neural network).
As per claim 11, Saito teaches the image processing device according to claim 6, wherein the at least one processor is configured to execute the instructions to output the suggestion to assist examiner's decision making (Saito, ¶[0181] “Furthermore, with diagnoses of H. pylori using the CNN according to this first embodiment, because a result can be immediately obtained if an endoscopic image in an endoscopic examination is inputted, it is possible to provide completely ‘online’ assistance to the diagnoses of H. pylori, and therefore, it becomes possible to solve the problem of heterogeneity of the distribution of medical doctors across the regions, by providing what is called ‘remote medical cares.’” This assists the examiner's decision making, and ¶[0182] “by using this CNN with an enormous number of images in storage, screening of a H. pylori infection can be assisted greatly without evaluations of endoscopic examiners”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SANTIAGO GARCIA whose telephone number is (571)270-5182. The examiner can normally be reached Monday-Friday 9:30am-5:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chineyere Wills-Burns can be reached at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SANTIAGO GARCIA/Primary Examiner, Art Unit 2673
/SG/