DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-7 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-7 of copending Application No. 18/417,144 to Sawada (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the Sawada claims recite the claimed invention, as shown in the following comparison:
Instant application claim 1:
1. An object detection apparatus comprising: a memory storing instructions; at least one processor configured to execute the instructions to:
acquire one or more images, the one or more images including a main image;
calculate a first map from the main image with use of a first model;
detect an object with reference to at least the first map;
determine whether a background image is present;
in a case where the background image is present, calculate, with use of a second model, a second map from the background image or from both the main image and the background image; and
in a case where the background image is present, detect the object with reference to not only the first map but also the second map.
Sawada claim 1:
1. A lesion detection apparatus comprising: a memory storing instructions; at least one processor configured to execute the instructions to:
acquire one or more images captured by endoscopic examination, the one or more images including a first image;
calculate a first map from the first image with use of a first model;
detect a lesion with reference to at least the first map;
determine whether a second image that is captured by a past endoscopic examination is present,
in a case where the second image is present, calculate, with use of a second model, a second map from the second image or from both the first image and the second image;
in a case where the second image is present, detect the lesion with reference to both the first map and the second map;
Instant application claim 2:
2. The object detection apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to:
in a case where the background image is present, detect the object with reference to a third map obtained by multiplying the first map by the second map.
Sawada claim 2:
2. The lesion detection apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to:
in a case where the second image is present, detect the lesion with reference to a third map obtained by multiplying the first map by the second map.
Instant application claim 3:
3. The object detection apparatus according to claim 1, wherein the determining comprises referring to a flag indicating whether the main image is present or whether the main image and the background image are present.
Sawada claim 3:
3. The lesion detection apparatus according to claim 1, wherein the determining comprises referring to a flag indicating whether the first image is present or whether the first image and the second image are present.
Instant application claim 4:
4. The object detection apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to:
acquire training data which includes at least one main image, at least one background image, and label information indicative of an object included in the at least one main image;
train the first model by machine learning with reference to the at least one main image and the label information which are included in the training data; and
train the first model and the second model by machine learning with reference to the at least one main image, the at least one background image, and the label information which are included in the training data.
Sawada claim 4:
4. The lesion detection apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to:
acquire training data which includes at least one first image, at least one second image, and label information indicative of a lesion included in the at least one first image;
train the first model by machine learning with reference to the at least one first image and the label information which are included in the training data; and
train the first model and the second model by machine learning with reference to the at least one first image, the at least one second image, and the label information which are included in the training data.
Instant application claim 5:
5. The object detection apparatus according to claim 1, wherein the at least one processor is further configured to execute the instructions to:
detect the object that is a lesion which is capable of being detected from an image captured by carrying out an endoscopic examination with respect to a subject, and
output a result of detection of the lesion for supporting decision making by a medical worker, the result being obtained by the detecting.
Sawada claims 1 and 5:
1. A lesion detection apparatus comprising: a memory storing instructions; at least one processor configured to execute the instructions to:
acquire one or more images captured by endoscopic examination …
display a detection result of the lesion to an output device.
5. The lesion detection apparatus according to claim 1, wherein the detection result of the lesion supports decision making by a medical worker.
Instant application claim 6:
6. An object detection method comprising:
acquiring one or more images, the one or more images including a main image;
calculating a first map from the main image with use of a first model;
detecting an object with reference to at least the first map;
determining whether a background image is present;
in a case where the background image is present, calculating, with use of a second model, a second map from the background image or from both the main image and the background image; and
in a case where the background image is present, detecting the object with reference to not only the first map but also the second map.
Sawada claim 6:
6. A lesion detection method comprising:
acquiring one or more images captured by endoscopic examination, the one or more images including a first image;
calculating a first map from the first image with use of a first model;
detecting a lesion with reference to at least the first map;
determining whether a second image that is captured by a past endoscopic examination is present …
in a case where the second image is present, calculating, with use of a second model, a second map from the second image or from both the first image and the second image;
in a case where the second image is present, detecting the lesion with reference to both the first map and the second map
Instant application claim 7:
7. A non-transitory tangible computer-readable storage medium storing therein an object detection program causing a computer to execute the processing comprising:
acquiring one or more images, the one or more images including a main image;
calculating a first map from the main image with use of a first model;
detecting an object with reference to at least the first map;
determining whether a background image is present;
in a case where the background image is present, calculating, with use of a second model, a second map from the background image or from both the main image and the background image; and
in a case where the background image is present, detecting the object with reference to not only the first map but also the second map.
Sawada claim 7:
7. A non-transitory tangible computer-readable storage medium storing therein a lesion detection program causing a computer to execute the processing comprising:
acquiring one or more images captured by endoscopic examination, the one or more images including a first image;
calculating a first map from the first image with use of a first model;
detecting a lesion with reference to at least the first map;
determining whether a second image that is captured by a past endoscopic examination is present …
in a case where the second image is present, calculating, with use of a second model, a second map from the second image or from both the first image and the second image;
in a case where the second image is present, detecting the lesion with reference to both the first map and the second map
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 3-7 are rejected under 35 U.S.C. 103 as being unpatentable over Ogino et al. (U.S. Pub. No. 2021/0272277) in view of Hosoya et al. (U.S. Pub. No. 2019/0313883).
As to claims 1, 6, and 7, Ogino et al. teaches an object detection apparatus (i.e., “medical imaging apparatus”, Abstract)/object detection method/non-transitory tangible computer-readable storage medium (i.e., “Data and programs required for processing of the image processing unit 200 are stored in the storage device 130”, Paragraph [0048]) storing therein an object detection program causing a computer to execute the processing comprising:
a memory storing instructions (i.e., “Data and programs required for processing of the image processing unit 200 are stored in the storage device 130”, Paragraph [0048]); at least one processor (i.e., “image processing unit 200”, Paragraph [0043]) configured to execute the instructions to:
acquire/acquiring one or more images, the one or more images including a main image (i.e., “image reconstructing unit 210 that reconstructs an image (first image) from the image signal received from the imaging unit 100”, Paragraph [0043]);
calculate/calculating a first map from the main image with use of a first model (i.e., “feature quantity extraction unit 232 that extracts a first feature quantity A from first image data”, Paragraph [0044]; and Paragraphs [0045] and [0071]);
detect/detecting an object with reference to at least the first map (i.e., “an identification unit 235 that calculates a predetermined parameter value using the third feature quantity C using an identification model and performs prediction”, Paragraph [0044]; and “This model calculates a predetermined parameter value from a feature quantity after conversion, and predicts the presence or absence of a lesion site, malignancy, etc. represented by the parameter value”, Paragraph [0072]);
in a case where a background image is present (i.e., “plurality of types of MRI images”, Paragraph [0121]), calculate/calculating, with use of a second model (i.e., model used for obtaining feature quantities from different type of image), a second map from the background image or from both the main image and the background image (i.e., “images (such as a diffusion weighted image) having different image quality parameters are input, and the feature quantity A is extracted for each patch in each image”, Paragraph [0121]); and
in a case where the background image is present, detect/detecting the object with reference to not only the first map but also the second map (i.e., “The feature quantity abstraction unit 233 inputs the feature quantity (the number of images×the number of patches) obtained by fusing the feature quantities A1 to A4 output from each feature quantity extraction unit 232, and outputs one feature quantity B”, Paragraph [0122]; and “A process after obtaining the feature quantity B is similar to that in the first embodiment … In this way, with respect to the second image, the feature quantity C in which the feature of the lesion, that is the diagnosis target, is appropriately extracted can be obtained”, Paragraph [0123]).
However, Ogino et al. does not explicitly disclose the at least one processor configured to execute the instructions to: determine/determining whether a background image is present.
Hosoya et al. teaches at least one processor (i.e., “management server 10”, Paragraph [0016]) configured to execute the instructions to: determine/determining whether a background image is present (i.e., “determining whether or not an endoscopic RAW image is similar to an endoscopic image for which an abnormal finding has been confirmed in the past when performing a compression process on the endoscopic RAW image and adding predetermined information to the image that has been compressed when the endoscopic RAW image is determined to be similar”, Paragraph [0021]).
Ogino et al. and Hosoya et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to modify Ogino et al. by incorporating the determining of whether a background image is present, as taught by Hosoya et al.
The suggestion/motivation for doing so would have been to allow image observation to be performed efficiently.
Therefore, it would have been obvious to combine Hosoya et al. with Ogino et al. to obtain the invention as specified in claims 1, 6, and 7.
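For illustration of the mapped limitations only, the following is a minimal sketch of the claimed detection flow in Python. The model functions, threshold, and combination rule are hypothetical placeholders under assumed simplifications; they are not drawn from Ogino et al., Hosoya et al., or the instant disclosure.

    import numpy as np

    def first_model(main_image):
        # Hypothetical stand-in for the first model: derive a per-pixel
        # score map from the main image alone.
        return main_image / (main_image.max() + 1e-8)

    def second_model(main_image, background_image):
        # Hypothetical stand-in for the second model: derive a second map
        # from the background image (or from both images).
        diff = np.abs(main_image - background_image)
        return diff / (diff.max() + 1e-8)

    def detect(main_image, background_image=None, threshold=0.5):
        first_map = first_model(main_image)
        if background_image is None:
            # No background image: detect with reference to the first map only.
            return first_map > threshold
        # Background image present: detect with reference to both maps.
        second_map = second_model(main_image, background_image)
        return (first_map > threshold) & (second_map > threshold)

    rng = np.random.default_rng(0)
    main = rng.random((64, 64))
    background = rng.random((64, 64))
    print(detect(main).sum(), detect(main, background).sum())

The sketch shows only the conditional structure of the claims: the second model runs, and the second map is consulted, only when a background image is present.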
As to claim 3, Hosoya et al. teaches wherein the determining comprises referring to a flag indicating whether the main image is present or whether the main image and the background image are present (i.e., “compression processing unit 40 adds information indicating analysis results provided from the bleeding state determination unit 34, the group identification unit 36, and the similarity determination unit 38 to the compressed image. More specifically, to the compressed image having an image ID provided from the bleeding state determination unit 34, the compression processing unit 40 adds information indicating that the image is a bleeding image. This information may be added as flag information”, Paragraph [0040]).
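For illustration only, a flag-based presence determination of the kind recited in claim 3 can be sketched as follows (the field names and flag values are hypothetical, not Hosoya et al.'s actual data structure):

    record = {
        "main_image": object(),         # stand-in for image data
        "background_image": object(),   # may be absent in practice
        "flag": "MAIN_AND_BACKGROUND",  # alternatively "MAIN_ONLY"
    }
    # The determination refers to the flag rather than inspecting the images.
    background_present = record["flag"] == "MAIN_AND_BACKGROUND"
    print(background_present)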
As to claim 4, Ogino et al. teaches wherein the at least one processor is further configured to execute the instructions to:
acquire training data which includes at least one main image (i.e., “input image”, Paragraph [0053]), at least one background image (i.e., “a different image from the input image”, Paragraph [0053]), and label information indicative of an object included in the at least one main image (i.e., “a label such as the presence or absence (benign or malignant) of a lesion or a grade of lesion malignancy as learning data”, Paragraph [0054]);
train the first model by machine learning with reference to the at least one main image and the label information which are included in the training data (i.e., “The CNN of this predictive model is learned to extract a feature quantity A 410 for accurately identifying the presence or absence of the lesion of an input image 400 by the CNN repeating the convolution calculation and pooling on input data of the input image 400 for learning divided into a plurality of patches by the patch processing unit 231”, Paragraph [0055]; and “Learning is performed until an error between an output and teacher data falls within a predetermined range”, Paragraph [0056]); and
train the first model and the second model by machine learning with reference to the at least one main image, the at least one background image, and the label information which are included in the training data (See for example, Paragraphs [0065]-[0066]; and “A process of obtaining the learning feature quantity C using such a CNN may be performed as a process in the image processing unit 200 (diagnosis support processing unit 230)”, Paragraph [0067]).
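For illustration of the two training steps recited in claim 4 only, the following is a minimal sketch using logistic-regression stand-ins for the two models. The data, dimensions, and learning rate are hypothetical assumptions; the cited references describe CNN feature extractors, not this toy model.

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical training data: main/background images flattened to
    # feature vectors, plus a binary label per main image.
    X_main = rng.random((32, 16))
    X_background = rng.random((32, 16))
    y = rng.integers(0, 2, size=32).astype(float)

    w1 = np.zeros(16)  # parameters of the first model
    w2 = np.zeros(16)  # parameters of the second model

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Step 1: train the first model on the main images and labels only.
    for _ in range(200):
        grad = X_main.T @ (sigmoid(X_main @ w1) - y) / len(y)
        w1 -= 0.1 * grad

    # Step 2: train the first and second models jointly on the main images,
    # background images, and labels (their scores are combined).
    for _ in range(200):
        residual = (sigmoid(X_main @ w1 + X_background @ w2) - y) / len(y)
        w1 -= 0.1 * X_main.T @ residual
        w2 -= 0.1 * X_background.T @ residual

    print("training accuracy:",
          ((sigmoid(X_main @ w1 + X_background @ w2) > 0.5) == y).mean())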
As to claim 5, Ogino et al. teaches wherein the at least one processor is further configured to execute the instructions to: detect the object that is a lesion which is capable of being detected from an image captured by carrying out an examination with respect to a subject (i.e., “a description will be given of the case where an image input to the diagnosis support processing unit is an image acquired [by] the MRI apparatus. However, the invention is not limited thereto. For example, other modality images of the CT, X-rays, ultrasonic waves, etc. may be input”, Paragraph [0119]), and
output a result of detection of the lesion for supporting decision making by a medical worker, the result being obtained by the detecting (i.e., “processing result of the diagnosis support processing unit 230 may be output to the output unit 120 provided in the image processing apparatus 20, or may be sent to the medical imaging apparatus to which the image data is sent, a facility in which the medical imaging apparatus is placed, a database in another medical institution, etc.”, Paragraph [0128]; and Paragraph [0138]).
However, Ogino et al. does not explicitly disclose that the examination is an endoscopic examination.
Hosoya et al. teaches that the examination is an endoscopic examination (i.e., “endoscopic image observation support system”, Paragraph [0014]).
Therefore, in view of Hosoya et al., it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ogino et al. by incorporating the examination as an endoscopic examination, as taught by Hosoya et al., because there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Ogino et al. in view of Hosoya et al. as applied to claim 1 above, and further in view of Sawada et al. (U.S. Pub. No. 2023/0410532). The teachings of Ogino et al. and Hosoya et al. have been discussed above.
As to claim 2, Ogino et al. and Hosoya et al. do not explicitly disclose wherein the at least one processor is further configured to execute the instructions to: in a case where the background image is present, detect the object with reference to a third map obtained by multiplying the first map by the second map.
Sawada et al. teaches at least one processor that is configured to execute the instructions to (i.e., Paragraph [0077]): in a case where the background image is present (i.e., more than one image, Paragraph [0197]), detect the object with reference to a third map obtained by multiplying the first map by the second map (i.e., “the object detection device 200 according to the first embodiment includes the image data acquiring unit 21 that acquires image data indicating an image captured by the camera 1, the first feature amount extracting unit 22 that generates the first feature map FM1 using the image data, the second feature amount extracting unit 23 that generates the second feature map FM2 using the image data, and generates the third feature map FM3 by performing addition or multiplication of the second feature map FM2 using the first feature map FM1 and weighting the second feature map FM2, and the object detection unit 24 that detects an object in the captured image using the third feature map FM3”, Paragraph [0206]).
Ogino et al., Hosoya et al. and Sawada et al. are analogous art because they are from the field of digital image processing for object detection.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to further modify Ogino et al. and Hosoya et al. by incorporating the detection of the object with reference to a third map obtained by multiplying the first map by the second map in a case where the background image is present, as taught by Sawada et al.
The suggestion/motivation for doing so would have been to cope with variations in size of individual objects to be detected in an image.
Therefore, it would have been obvious to combine Sawada et al. with Ogino et al. and Hosoya et al. to obtain the invention as specified in claim 2.
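For illustration of the third-map limitation of claim 2 only, a minimal sketch follows. The maps and threshold are hypothetical placeholders; Sawada et al. describes addition or multiplication of feature maps, and multiplication is shown here.

    import numpy as np

    rng = np.random.default_rng(0)
    first_map = rng.random((64, 64))   # hypothetical first map
    second_map = rng.random((64, 64))  # hypothetical second map

    # Third map obtained by multiplying the first map by the second map
    # (elementwise): a location must score highly in both maps to survive.
    third_map = first_map * second_map

    # Detect with reference to the third map (threshold is a placeholder).
    print((third_map > 0.5).sum())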
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSE M TORRES whose telephone number is (571) 270-1356. The examiner can normally be reached Monday through Friday, 10:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSE M TORRES/Examiner, Art Unit 2664 01/08/2026