DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claims 1, 8, and 11 are objected to because of the following informalities:
Claims 1 and 11 recite “displays the second biological image in a boundary area between a valid screen and an invalid screen or in an area of the invalid screen.”
Paragraph 10 of the Specification states, “The marker may be displayed at least two or more in at least one of the boundary areas between the valid screen and the invalid screen.” However, the claims recite that the “second biological image” is displayed between the valid and invalid screen areas. Therefore, the claims should be amended to recite “displays the [[second biological image]] marker in a boundary area between a valid screen and an invalid screen or in an area of the invalid screen.”
Claim 8 recites “the second marker”; however, there is insufficient antecedent basis for this limitation in the claim.
Appropriate correction is required.
Specification
The disclosure is objected to because of the following informalities: throughout the Specification, it states “displays the second biological image in a boundary area between a valid screen and an invalid screen or in an area of the invalid screen.” However, the marker, and not the second biological image, is displayed between the valid/invalid screen areas.
Appropriate correction is required.
Drawings
The drawings are objected to because in Fig. 8, step S850, it states “displaying the second bio image in a boundary area between an effective screen and a non-effective screen.” However, the marker, and not the second bio image, is displayed between the valid/invalid screen areas.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 9, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210000327 to Kitamura in view of US 20190125306 to Oh.
Regarding claim 1, Kitamura discloses an apparatus for displaying a biological image comprising (paragraph 50-51; Fig. 2 displays object in bio image):
a processor (paragraph 35-37; control section/ CPU);
a memory that is communicatively coupled to the processor and stores one or more sequences of instructions, which when executed by the processor causes steps to be performed comprising (paragraph 36-37; memory coupled to CPU stores a program which is executed to perform the method):
extracting a lesion information from a first biological image that has been continuously captured over time for a target object based on a machine learning (paragraph 40, 111-112; extracting lesion candidate information IL from first image from endoscope of target object using a discriminator based on machine learning such as deep learning; paragraph 115; endoscopic images of “plurality of frames sequentially outputted” reads on “continuously captured over time”); and
generating a second biological image including a marker for displaying the lesion information by image processing the first biological image (paragraph 123-126; in s33 when there is restriction for putting marker in main screen image MG, it generates second image shown in Fig. 14 having marker MM3 for displaying the lesion region information; paragraph 43; highlighting processing section 133A performs image processing to add marker on image),
a display unit that displays the second biological image in a boundary area between a valid screen and an invalid screen or in an area of the invalid screen (paragraph 126; second image in Fig. 14 is displayed with marker MM3 on “outside of the main screen MG” which is invalid screen area where there is no tissue image).
However Kitamura does not disclose displaying a tissue of a biological image and extracting a lesion information from a first biological image for a target object based on a machine learning model.
Oh discloses displaying a tissue of a biological image (paragraph 60, 121; tissue in image for target object) and extracting a lesion information from a first biological image for a target object based on a machine learning model (paragraph 148-149, 189, 205; data recognition model (DNN) trained via learning data is used to detect lesion information such as location of lesion in image for target).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kitamura as taught by Oh to provide a machine learning model for extracting lesion information from an image.
The motivation to combine the references is to provide a model using a neural network, trained on the learning data, that classifies images so that abnormalities can be inferred (paragraph 149-150).
Regarding claim 3, Kitamura discloses the apparatus of claim 1, wherein the lesion information is a size of a lesion within the valid screen of the display unit (paragraph 60, 102; lesion candidate information IL1 includes size of lesion; lesion candidate region is within the main screen MG (valid screen)).
Regarding claim 4, Kitamura discloses the apparatus of claim 1, wherein the marker is displayed at least two or more in at least one of the boundary areas between the valid screen and the invalid screen or the area of the invalid screen of the display unit (paragraph 126; second image in Fig. 14 is displayed with marker MM3 on “outside of the main screen MG” which is invalid screen area where there is no tissue image; in Fig. 14 it shows two or more marks MM3 in invalid area of screen outside of MG).
Regarding claim 5, Kitamura discloses the apparatus of claim 1, wherein the marker is a first marker that indicates a location of a lesion (paragraph 126; second image in Fig. 14 is displayed with marker MM3 to indicate location of lesion).
Regarding claim 9, Oh discloses the apparatus of claim 1, wherein the marker is a third marker that indicates a presence or an absence of the lesion (paragraph 234, 241-242; markers 920, 1020 indicate normal (absence) or abnormal (presence) of lesions).
Regarding claim 11, Kitamura discloses a method for displaying a biological image (paragraph 50-51; Fig. 2 displays object in bio image), comprising:
extracting a lesion information from a first biological image that has been continuously captured over time for a target object based on a machine learning (paragraph 40, 111-112; extracting lesion candidate information IL from first image from endoscope of target object using a discriminator based on machine learning such as deep learning; paragraph 115; endoscopic images of “plurality of frames sequentially outputted” reads on “continuously captured over time”);
generating a second biological image including a marker for displaying the lesion information by image processing the first biological image (paragraph 123-126; in s33 when there is restriction for putting marker in main screen image MG, it generates second image shown in Fig. 14 having marker MM3 for displaying the lesion region information; paragraph 43; highlighting processing section 133A performs image processing to add marker on image); and
displaying the second biological image in a boundary area between a valid screen and an invalid screen or in an area of the invalid screen (paragraph 126; second image in Fig. 14 is displayed with marker MM3 on “outside of the main screen MG” which is invalid screen area where there is no tissue image).
However Kitamura does not disclose displaying a tissue of a biological image and extracting a lesion information from a first biological image for a target object based on a machine learning model.
Oh discloses displaying a tissue of a biological image (paragraph 60, 121; tissue in image for target object) and extracting a lesion information from a first biological image for a target object based on a machine learning model (paragraph 148-149, 189, 205; data recognition model (DNN) trained via learning data is used to detect lesion information such as location of lesion in image for target).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kitamura as taught by Oh to provide a machine learning model for extracting lesion information from an image.
The motivation to combine the references is to provide a model using a neural network, trained on the learning data, that classifies images so that abnormalities can be inferred (paragraph 149-150).
Regarding claim 12, Kitamura discloses the method of claim 11, wherein the lesion information is at least one of a presence, size, or location of the lesion (paragraph 60, 102; lesion candidate information IL1 includes size of lesion; lesion candidate region is within the main screen MG (valid screen)).
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over US 20210000327 to Kitamura in view of US 20190125306 to Oh, further in view of JP 2008289916 to Goto.
Regarding claim 2, Kitamura discloses the apparatus of claim 1, wherein the lesion information is a position of the lesion within the valid screen of the display unit (paragraph 39).
However, Kitamura does not disclose wherein the lesion information is 2-dimension or 3-dimension coordinates of the lesion within the valid screen of the display unit.
Goto discloses wherein the lesion information is 2-dimension or 3-dimension coordinates of the lesion (paragraph 42-43; lesion information such as 3D coordinates of the lesion is stored).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kitamura as taught by Goto to provide coordinate information of the lesion in the image.
The motivation to combine the references is to provide storage of coordinate data of the lesion information such that it can be retrieved from memory at any time and used to display the markers (paragraph 272).
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over US 20210000327 to Kitamura in view of US 20190125306 to Oh, further in view of US 20150078615 to Staples.
Regarding claim 6, Kitamura does not disclose wherein the first marker moves depending on the movement of the lesion.
Staples discloses wherein the first marker moves depending on the movement of the lesion (paragraph 26, 44; when lesion moves, the marking 302 also moves).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kitamura as taught by Staples to provide movement of the markers in association with lesion movement.
The motivation to combine the references is to provide tracking of the lesion when the lesion moves, and motion estimation for lesions that are missing from the view (paragraph 26).
Claims 7, 8, and 10 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210000327 to Kitamura in view of US 20190125306 to Oh, further in view of JP H0935043 to Kondo.
Regarding claim 7, Kitamura does not disclose wherein the marker is a second marker that indicates a size of the lesion.
Kondo discloses wherein the marker is a second marker that indicates a size of the lesion (paragraph 3, 54; the symbol indicates the size of abnormal shadow (lesion)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Kitamura as taught by Kondo to provide a marker indicating the size of the lesion.
The motivation to combine the references is to provide a graphical representation of the lesion size, making it easier to readily determine the size of the lesion from the size of the marker (paragraph 54).
Regarding claim 8, Kondo discloses the apparatus of claim 1, wherein a size of the second marker changes depending on the size of the lesion (paragraph 5, 54; size of symbols (second marker) changes based on size of abnormality (lesion)).
Regarding claim 10, Kondo discloses the apparatus of claim 9, wherein at least one of a brightness, color, or width of the third marker changes depending on the size of the lesion (paragraph 54; size of symbols (third marker) changes based on size of abnormality (lesion); size change results in width change).
Other Prior Art Cited
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
US 20210073990 to Jeong.
US 20190223790 to Yoo.
US 20210274999 to Kubota.
US 12070356 to Oh.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BENIYAM MENBERU whose telephone number is (571) 272-7465. The examiner can normally be reached on Monday-Friday, 10:00am-6:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong can be reached on (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Any inquiry of a general nature or relating to the status of this application or proceeding should be directed to the customer service office whose telephone number is (571) 272-2600. The group receptionist number for TC 2600 is (571) 272-2600.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
For more information about the PAIR system, see <http://pair-direct.uspto.gov/>. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Patent Examiner
Beniyam Menberu
/BENIYAM MENBERU/Primary Examiner, Art Unit 2681
09/18/2025