DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement filed on 05/24/2023 fails to comply with 37 CFR 1.98(a)(3)(i) because it does not include a concise explanation of the relevance, as it is presently understood by the individual designated in 37 CFR 1.56(c) most knowledgeable about the content of the information, of each reference listed that is not in the English language. It has been placed in the application file, but the information referred to therein has not been considered.
The Office Action issued on March 29, 2021, cited in the IDS dated 05/24/2023, is not in the English language and therefore has not been considered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: an observation image acquisition unit, a pre-processing unit, a lesion diagnosis unit, and a screen display control unit, in claim 1; the observation image acquisition unit, in claims 2 and 3; the lesion diagnosis unit, in claims 4 and 5; a pre-processing unit, a lesion area detection unit, and a screen display control unit, in claim 7; the pre-processing unit, in claim 8; the lesion area detection unit, in claim 9; a pre-processing unit, a lesion area detection unit, a lesion diagnosis unit, and a screen display control unit, in claim 10; the pre-processing unit, in claim 11; the lesion area detection unit, in claim 12; the lesion diagnosis unit, in claim 13; an insertion unit, an image signal processing unit, a display unit, an observation image acquisition unit, a pre-processing unit, a lesion diagnosis unit, and a screen display control unit, in claim 14; an insertion unit, an image sensing unit, an image signal processing unit, a display unit, a pre-processing unit, a lesion area detection unit, a lesion diagnosis unit, and a screen display control unit, in claim 15; and the lesion diagnosis unit, in claims 16, 17 and 18.
In the specification, paragraph 0045 (i.e., program data (application program) installed in a computer … executable in a main processor … program executable in the main processor), paragraphs 0052, 0056 and 0058 (i.e., observation image acquisition unit 210), and/or figure 2 (observation image acquisition unit 210) is/are being interpreted to read on: an/the observation image acquisition unit, in claims 1, 2, 3, and 14.
In the specification, paragraph 0045 (i.e., program data (application program) installed in a computer … executable in a main processor … program executable in the main processor), paragraphs 0053 and 0126 (i.e., pre-processing unit 220), and/or figure 2 (pre-processing unit 220) is/are being interpreted to read on: a/the pre-processing unit, in claims 1, 7, 8, 10, 11, 14 and 15.
In the specification, paragraphs 0045 and 0054, and figure 2 (i.e., lesion diagnosis unit 230) is/are being interpreted to read on: a/the lesion diagnosis unit, in claims 1, 4, 5, 10, 13, 14, 15, 16, 17 and 18.
In the specification, paragraphs 0045 and 0055, and figure 2 (i.e., a screen display control unit) is/are being interpreted to read on: a/the screen display control unit, in claims 1, 7, 10, 14 and 15.
In figures 3 and 4 and the associated text in the specification, paragraphs 0045 and 0077 (i.e., lesion area detection unit 225), is/are being interpreted to read on: a/the lesion area detection unit, in claims 7, 9, 10, 12 and 15.
In paragraphs 0124 and 0129, and in figures 1-4 (e.g., paragraph 0124: “One endoscopic equipment 100 may be constructed, as illustrated in FIG. 2, by further including, for example, in the system for diagnosing the image lesion (preferably understood as endoscope equipment) including the endoscope including an insertion unit inserted into a human body and an image sensing unit which is positioned within the insertion unit and senses light reflected from the human body to generate an endoscope image signal, the image signal processing unit for processing an endoscopic image signal captured by the endoscope into a displayable endoscopic image, and the display unit for displaying the endoscopic image”), is/are being interpreted to read on: an insertion unit, an image sensing unit, an image signal processing unit, and a display unit, in claims 14 and 15.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1, 4, 5, 6, 7, 9, 10, 12, 13, 14, 15 and 18 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by CHO et al. (US 2022/0031227 A1).
As to claim 1, CHO et al. discloses a system for diagnosing an image lesion (see figures 1-4), the system comprising:
an observation image acquisition unit (see figure 2, image acquisition part 11) configured to acquire an observation image from an input endoscopic image (see paragraph 0010, i.e., “The application of endoscopic imaging technologies such as narrow-band imaging, confocal imaging or magnifying techniques (so-called image-enhanced endoscopy) is also known to enhance diagnostic accuracy”; see paragraph 0013, i.e., “collecting white-light gastroendoscopic images acquired by an endoscopic video imaging device”; see paragraph 0015, i.e., “multiple image data in real time which is acquired when a doctor (user) examines the gastric tumor using an endoscopic device”; and see paragraphs 0048, 0051, 0054, 0055, 0056, 0057, and 0060, i.e., image acquisition part 11: “The image acquisition part 11 may acquire a plurality of gastric lesion images. The image acquisition part 11 may receive gastric lesion images from an imaging device provided in the endoscopic device 20. The image acquisition part 11 may acquire gastric lesion images acquired with an endoscopic video imaging device (digital camera) used for gastroendoscopy. The image acquisition part 11 may collect white-light gastroendoscopic images of a pathologically confirmed lesion. Also, the image acquisition part 11 may receive a plurality of gastric lesion images from a plurality of hospitals' image storage devices and database systems. The plurality of hospitals' image storage devices may be devices that store gastric lesion images acquired during gastroendoscopy in multiple hospitals”);
a pre-processing unit configured to pre-process an acquired observation image (see paragraphs 0059 and 0070-0076, i.e., preprocessing part 13; see figure 2, data preprocessing part 13; and see figure 4, preprocess dataset S403);
a lesion diagnosis unit configured to diagnose a degree of lesion on the pre-processed observation image using a pre-trained artificial neural network learning model for lesion diagnosis (see figure 2, lesion diagnostic part 15; see figure 4, perform gastric lesion diagnosis S405; and see paragraph 0092, i.e., “The lesion diagnostic part 15 may perform a gastric lesion diagnosis through an artificial neural network after passing a new dataset through a preprocessing process. In other words, the lesion diagnostic part 15 may derive a diagnosis on new data by using the final diagnostic model derived by the training part 14. The new data may include gastric lesion images based on which the user wants to make a diagnosis. The new dataset may be a dataset that is generated by linking gastric lesion images with patient information. The new dataset may be preprocessed such that it becomes applicable to a deep learning algorithm after passing through the preprocessing process of the preprocessing part 13. Afterwards, the preprocessed new dataset may be fed into the training part 14 to make a diagnosis with respect to the gastric lesion images based on training parameters.”); and
a screen display control unit configured to display and output a lesion diagnosis result (see paragraph 0057, i.e., “The display device 23 may present the user gastroendoscopic images acquired from the endoscopic device 20 and information on a gastric lesion diagnosis made by the lesion diagnostic device 10. The display device 23 may include a touchscreen—for example, it may receive a touch, gesture, proximity, or hovering input using an electronic pen or a part of the user's body. The display device 23 may output gastroendoscopic images acquired from the endoscopic device 20. Also, the display device 23 may output gastric lesion diagnostic results”).
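For orientation only, the four units mapped above correspond to a simple software pipeline. The following Python sketch illustrates that mapping under assumed interfaces: the function names, the `model.predict` interface, and the OpenCV-based I/O are hypothetical and are not drawn from the application or from CHO; the label set follows claim 6 of the application.

```python
# Illustrative sketch only: a four-stage pipeline mirroring the units recited
# in claim 1 (acquisition, pre-processing, ANN-based diagnosis, display).
# All helper names and the model interface are assumptions.
import cv2
import numpy as np

# Label set as recited in claim 6 of the application.
LABELS = ["normal", "low-grade dysplasia", "high-grade dysplasia",
          "early gastric cancer", "advanced gastric cancer"]

def acquire_observation_image(capture):
    """Observation image acquisition unit: grab one frame from the endoscope feed."""
    ok, frame = capture.read()
    return frame if ok else None

def preprocess(frame):
    """Pre-processing unit: resize and normalize the frame for the network."""
    frame = cv2.resize(frame, (224, 224))
    return frame.astype(np.float32) / 255.0

def diagnose(model, frame):
    """Lesion diagnosis unit: score the frame with a pre-trained classifier."""
    scores = model.predict(frame[np.newaxis])  # assumed (1, 5) output
    return LABELS[int(np.argmax(scores))]

def display_result(frame, diagnosis):
    """Screen display control unit: overlay the diagnosis and show the frame."""
    out = (frame * 255).astype(np.uint8)
    cv2.putText(out, diagnosis, (10, 20), cv2.FONT_HERSHEY_SIMPLEX,
                0.5, (0, 255, 0), 1)
    cv2.imshow("lesion diagnosis", out)
    cv2.waitKey(1)
```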
As to claim 4, CHO et al. discloses wherein the lesion diagnosis unit includes a pre-trained artificial neural network learning model for one or more lesions diagnosis in order to diagnose a degree of lesion for each of one or more endoscopic images among a gastric endoscope image, a small intestine endoscopy image, and a large intestine endoscopy image (see figure 2, lesion diagnostic part 15; see figure 4, perform gastric lesion diagnosis S405; see paragraph 0092, quoted in the discussion of claim 1 above; and see paragraph 0055, i.e., small or large intestine).
As to claim 5, CHO et al. discloses wherein the lesion diagnosis unit is configured to detect a lesion area in the pre-processed observation image using the pre-trained artificial neural network learning model for lesion diagnosis, and then diagnoses the degree of lesion for the detected lesion area (see figure 2, lesion diagnostic part 15; see figure 4, perform gastric lesion diagnosis S405; and see paragraph 0092, quoted in the discussion of claim 1 above).
As to claim 6, CHO et al. discloses wherein the artificial neural network learning model for lesion diagnosis is configured to diagnose normal, low-grade dysplasia, high-grade dysplasia, early gastric cancer, and advanced gastric cancer on a gastric endoscopic image (see paragraphs 0008, 0016, 0030, 0055, 0093, 0101, and 0115, i.e., dysplasia, gastric dysplasia, gastric cancer, advanced gastric cancer, early gastric cancer, high-grade dysplasia, and low-grade dysplasia; and see paragraph 0093, diagnose and classify gastric lesions as cancerous or non-cancerous, i.e., normal).
As to claim 7, CHO et al. discloses a system for diagnosing an image lesion (see figures 1-4), the system comprising:
a pre-processing unit configured to pre-process an input endoscopic image (see paragraphs 0059 and 0070-0076, i.e., preprocessing part 13; see figure 2, data preprocessing part 13; and see figure 4, preprocess dataset S403);
a lesion area detection unit configured to detect a lesion area in real time from the pre-processed endoscopic image frame using a pre-trained artificial neural network learning model for real-time lesion area detection (see figure 2, lesion diagnostic part 15; see figure 4, perform gastric lesion diagnosis S405; see paragraph 0054, i.e., “acquired in real time”; and see paragraph 0092, quoted in the discussion of claim 1 above); and
a screen display control unit configured to display and output an endoscopic image frame in which the detected lesion area is marked (see paragraph 0057, quoted in the discussion of claim 1 above).
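For reference only, the claim 7 flow of detecting lesion areas per frame and displaying the frame with each detected area marked can be pictured with the short hypothetical sketch below; the detector interface returning (x, y, w, h, score) boxes is an assumption, not an API from the application or from CHO.

```python
# Illustrative sketch of the claim 7 flow: detect lesion areas in a frame and
# display the frame with each detected area marked. The detector interface
# (boxes of x, y, w, h, score) is assumed for illustration.
import cv2

def mark_and_show(frame, detector, min_score=0.5):
    for (x, y, w, h, score) in detector.detect(frame):
        if score >= min_score:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("endoscopic image with marked lesion area", frame)
    cv2.waitKey(1)
```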
Independent claim 10 recites the same or similar claim limitations or features as discussed and addressed above in claims 1 and 7. Therefore, claim 10 is rejected for the same or similar reasons as discussed above in claims 1 and 7.
As to claim 14, CHO et al. discloses a system for diagnosing an image lesion (see figures 1-4) that includes an endoscope (see paragraph 0053) including an insertion unit (see paragraph 0053, body part 22 inserted into the body) inserted into a human body and an image sensing unit (see paragraphs 0053-0054, i.e., imaging part and lighting part) which is positioned within the insertion unit (22) and senses light reflected from the human body (see paragraph 0053, lighting part) to generate an endoscope image signal, an image signal processing unit (see paragraph 0054, lesion diagnostic device 10) for processing an endoscopic image signal captured by the endoscope into a displayable endoscopic image, and a display unit (see paragraph 0057, display device 23) for displaying the endoscopic image. As to the rest of the claim limitations, applicant is directed to the remarks and the discussion made in claim 1 above.
Independent claim 15 recites the same or similar claim limitations or features as discussed and addressed above in claims 10 and 14. Therefore, claim 15 is rejected for the same or similar reasons as discussed above in claims 10 and 14.
Regarding claim 9, claim 9 recites the same or similar claim limitations or features as discussed and addressed above in claims 4 and 7. Therefore, claim 9 is rejected for the same or similar reasons as discussed above in claims 4 and 7.
Regarding claim 12, claim 12 recites the same or similar claim limitations or features as discussed and addressed above in claim 9. Therefore, claim 12 is rejected for the same or similar reasons as discussed above in claim 9.
Regarding claim 13, claim 13 recites the same or similar claim limitations or features as discussed and addressed above in claim 4. Therefore, claim 13 is rejected for the same or similar reasons as discussed above in claim 4.
Regarding claim 18, claim 18 recites the same or similar claim limitations or features as discussed and addressed above in claim 5. Therefore, claim 18 is rejected for the same or similar reasons as discussed above in claim 5.
Allowable Subject Matter
Claims 2, 3, 8, 11, 16 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 2, the closest prior art of record, namely, CHO et al. (US 2022/0031227 A1), discussed above, does not disclose, teach or suggest, wherein the observation image acquisition unit is configured to acquire, as the observation image, image frames whose inter-frame similarity exceeds a predetermined threshold among frames of the endoscopic image, as recited in claim 2.
Claim 16 is objected to because it depends from objected-to claim 2, discussed above.
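To make the claim 2 limitation concrete, one purely illustrative reading is a similarity gate between consecutive frames, as in the Python sketch below; the histogram-correlation measure and the 0.9 threshold are assumptions, not taken from the application.

```python
# Illustrative only: keep frames whose similarity to the preceding frame
# exceeds a predetermined threshold (claim 2). The histogram-correlation
# measure and the 0.9 threshold are assumptions.
import cv2

def frame_similarity(a, b):
    """Histogram correlation between two BGR frames (1.0 = identical)."""
    ha = cv2.calcHist([cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)], [0], None, [64], [0, 256])
    hb = cv2.calcHist([cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)], [0], None, [64], [0, 256])
    return cv2.compareHist(ha, hb, cv2.HISTCMP_CORREL)

def select_observation_frames(frames, threshold=0.9):
    """Yield frames whose inter-frame similarity exceeds the threshold."""
    prev = None
    for frame in frames:
        if prev is not None and frame_similarity(prev, frame) > threshold:
            yield frame
        prev = frame
```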
Regarding claim 3, the closest prior art of record, namely, CHO et al. (US 2022/0031227 A1), discussed above, does not disclose, teach or suggest, wherein the observation image acquisition unit is configured to capture and acquire the endoscopic image as an observation image when an electric signal generated according to a machine freeze operation of an endoscope equipment operator is input, as claimed in claim 3.
Claim 17 is objected to because it depends from objected-to claim 3, discussed above.
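For illustration of the claim 3 limitation only, the following hypothetical sketch captures the current frame as the observation image when a freeze signal is received; a key press stands in for the electric signal generated by the operator's machine-freeze operation.

```python
# Illustrative only: acquire the current frame as the observation image when a
# freeze signal is received (claim 3). A key press stands in for the electric
# signal generated by the operator's machine-freeze operation.
import cv2

def capture_on_freeze(capture):
    while True:
        ok, frame = capture.read()
        if not ok:
            return None
        cv2.imshow("live endoscopic image", frame)
        if cv2.waitKey(1) & 0xFF == ord("f"):  # stand-in freeze signal
            return frame  # captured observation image
```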
Regarding claim 8, the closest prior art of record, namely, CHO et al. (US 2022/0031227 A1), discussed above, does not disclose, teach or suggest, wherein the pre-processing unit is configured to recognize and remove blood, text, and biopsy instruments from the endoscopic image in frame units, as recited in claim 8.
Regarding claim 11, the closest prior art of record, namely, CHO et al. (US 2022/0031227 A1), discussed above, does not disclose, teach or suggest, wherein the pre-processing unit is configured to recognize and remove blood, text, and biopsy instruments from an endoscopic image frame, as claimed in claim 11.
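For illustration of the kind of pre-processing recited in claims 8 and 11, the sketch below shows one hypothetical way to remove overlaid text from an endoscopic frame by masking bright overlay pixels and inpainting; detecting blood or biopsy instruments would realistically require a trained segmentation model, and the threshold values here are assumptions.

```python
# Illustrative only: remove overlaid text from an endoscopic frame (cf. claims
# 8 and 11) by masking near-white overlay pixels and inpainting the masked
# regions from surrounding tissue. Threshold values are assumptions.
import cv2
import numpy as np

def remove_overlay_text(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 240, 255, cv2.THRESH_BINARY)  # near-white overlay
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=2)
    return cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```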
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
HORIUCHI et al. (US 2023/0162356 A1) teaches an endoscope capturing apparatus (200), a diagnostic imaging device, and an estimation unit that uses a CNN trained with gastric cancer images and non-gastric cancer images as training data to estimate the presence of gastric cancer in the acquired endoscopic video image (see the abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DOV POPOVICI whose telephone number is (571)272-4083. The examiner can normally be reached Monday - Friday 8:00 am- 4:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi M. Sarpong can be reached at 571-270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DOV POPOVICI/Primary Examiner, Art Unit 2681