DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This is an initial office action in response to communication(s) filed on February 27, 2024.
Claims 1-20 are pending.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on May 20, 2024 and June 19, 2024 were filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 19-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter as follows:
Claim 19 is drawn to “a computer readable storage medium”. Claims must be given their broadest reasonable interpretation consistent with the specification1. The broadest reasonable interpretation of a claim drawn to a “computer readable storage medium” typically covers both forms of non-transitory tangible media and transitory propagating signals per se, in view of the ordinary and customary meaning of “computer readable storage medium”2. In the instant case, as evidenced by applicant’s specification, i.e. para. 77, which recites that “… Computer-readable storage medium 700 may comprise any non-transitory computer-readable storage medium or machine-readable storage medium, such as an optical, magnetic or semiconductor storage medium. In various embodiments, computer-readable storage medium 700 may comprise an article of manufacture….”, the specification merely provides several exemplary instances of a “computer readable storage medium” and does not constitute an explicit definition controlling the scope of the claimed “computer readable storage medium”. Therefore, under the broadest reasonable interpretation consistent with the specification, the scope of the limitation encompasses transitory media (i.e. propagating signals, carrier waves, etc.). Also see MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter3. Therefore, by broadest reasonable interpretation consistent with the specification provided by applicant, the claimed “computer readable storage medium” is considered to cover, inter alia, carrier waves, signals and wireless media (acoustic, RF, infrared, etc.), which are considered to be non-statutory subject matter under 35 U.S.C. 101 for at least the following reasons. (For suggestions for amendment, see the memo “Subject Matter Eligibility of Computer Readable Media”2.)
A claim covering a “signal (or carrier wave)”, i.e. functional descriptive material modulated/encoded/embodied on a carrier wave, is non-statutory. While functional descriptive material may be claimed as a statutory product (i.e., a “manufacture”) when embodied on a tangible computer readable medium, a “signal” per se does not fall within any of the four statutory classes of 35 U.S.C. §101. A “signal” is not a process because it is not a series of steps per se. Furthermore, a “signal” is not a “machine”, “composition of matter” or a “manufacture” because these statutory classes “relate to structural entities and can be grouped as ‘product’ claims in order to contrast them with process claims.” (1 D. Chisum, Patents § 1.02 (1994)). Machines, manufactures and compositions of matter are embodied by physical structures or materials, whereas a “signal” has neither a physical structure nor a tangible material. That is, a “signal” is not a “machine” because it has no physical structure and does not perform any useful, concrete and tangible result. Likewise, a “signal” is not a “composition of matter” because it is not “matter”, but rather a form of energy. Finally, a “signal” is not a “manufacture” because all traditional definitions of a “manufacture” have required some form of physical structure, which a claimed signal does not have; a “manufacture” is defined as “the production of articles for use from raw materials or prepared materials by giving to these materials new forms, qualities, properties, or combinations, whether by hand-labor or by machinery.”4 Therefore, a “signal (or a carrier wave)” is considered non-statutory because it is a form of energy, in the absence of any physical structure or tangible material, that does not fall within any of the four statutory classes of 35 U.S.C. §101 (also see MPEP 2106 for further details).
In addition, the issue(s) discussed above with respect to claim 19 also apply to dependent claim 20, which does not further limit the claimed invention to statutory subject matter. Claim 20 is therefore also rejected under 35 U.S.C. §101 as claiming non-statutory subject matter, for at least the rationale(s) set forth above.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-10 and 12-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gupta et al. (“Multi-class motion-based semantic segmentation for ureteroscopy and laser lithotripsy”, Computerized Medical Imaging and Graphics, Pergamon Press, New York, NY, US, vol. 101, 8 August 2022 (2022-08-08), XP087197933; a copy of Gupta et al. has already been placed in the record; hereinafter “Gupta et al.”).
With regard to claim 1, the claim is drawn to a method of endoscopic imaging (see Gupta et al., i.e. in Section 3, para. 1, etc.), comprising:
receiving, from an endoscopic probe while it is deployed, imaging data of a visual field including one or more renal calculi and one or more surgical instruments (see Gupta et al., i.e. in Section 3.1, para. 1, wherein the images are received during deployment of the endoscopic probe in vitro and in vivo);
analyzing the received imaging data using a segmentation neural network (see Gupta et al., i.e. in Section 3.2, wherein the segmentation neural network corresponds to the end-to-end CNN model comprising the HybResUNet and the DVFNet);
generating, from the network analysis, a classification of the visual field into spatial regions, wherein a first spatial region is classified as one or more renal calculi and a second, distinct spatial region is classified as one or more surgical instruments (see Gupta et al., i.e. in Section 3.2; also see Section 4.5, fig. 5, etc.); and
based on the classification of the visual field, modifying a display of the imaging data provided during deployment of the endoscopic probe (see Gupta et al., i.e. in Fig. 5, wherein the original imaging data provided during deployment of the endoscopic probe, see first column, is displayed as a segmentation mask, see last column of fig. 5 as illustrated);
wherein the segmentation neural network analyzes the imaging data based on machine learning of training data performed using a loss function with both region-based and contour-based loss components (see Gupta et al., i.e. in Section 3.2.3, wherein the focal loss, LFL, is a region-based loss, and the boundary loss, Lboundary, is a contour-based loss).
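For illustration only, a loss of the kind recited in claim 1 (a region-based component combined with a contour-based component) can be sketched as follows. This is a minimal sketch, not the actual formulation in Gupta et al.; the function names, weights, and the signed-distance-map boundary term are illustrative assumptions:

```python
import numpy as np

def focal_loss(probs, targets, alpha=0.25, gamma=2.0, eps=1e-7):
    # Region-based component: focal loss down-weights well-classified
    # pixels so training concentrates on hard regions.
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    return float(np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)))

def boundary_loss(probs, signed_dist):
    # Contour-based component (illustrative): predicted foreground
    # probability weighted by a signed distance map to the ground-truth
    # contour (negative inside the true region, positive outside).
    return float(np.mean(probs * signed_dist))

def total_loss(probs, targets, signed_dist, w_region=1.0, w_contour=1.0):
    # Weighted sum of the region term and the contour term.
    return (w_region * focal_loss(probs, targets)
            + w_contour * boundary_loss(probs, signed_dist))
```

In practice the signed distance map would be precomputed from each ground-truth mask, and the relative weights of the two components tuned during training.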
With regard to claim 2, the claim is drawn to the method of claim 1, wherein the segmentation neural network machine learning is performed using a deformation vector field network (see Gupta et al., i.e. in Section 3.2.2, the teachings of DVFNet, and also see the details of fig. 4).
With regard to claim 3, the claim is drawn to the method of claim 2, wherein the loss function used in the machine learning of the segmentation neural network further includes a cross-correlation loss component from one or more warped images generated by the deformation vector field network (see Gupta et al., i.e. in Section 3.2.3, cross-correlation loss, Lsim).
With regard to claim 4, the claim is drawn to the method of claim 2, wherein the loss function used in the machine learning of the segmentation neural network further includes a smoothing component from one or more deformation vector field maps generated by the deformation vector field network (see Gupta et al., i.e. in Section 3.2.3, smoothness constraint, Lsmo).
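The two deformation-field loss components addressed in claims 3-4, a cross-correlation similarity term over warped images and a smoothness constraint over the deformation vector field, can be sketched as follows. This is a minimal illustration under assumed formulations (global normalized cross-correlation and a first-difference gradient penalty), not the exact losses used by Gupta et al.:

```python
import numpy as np

def ncc_loss(warped, fixed, eps=1e-7):
    # Similarity term: negative normalized cross-correlation between the
    # warped moving image and the fixed (target) image; minimised when
    # the two images are maximally correlated.
    w = warped - warped.mean()
    f = fixed - fixed.mean()
    return float(-(w * f).sum() / (np.sqrt((w * w).sum() * (f * f).sum()) + eps))

def smoothness_loss(dvf):
    # Smoothness constraint: penalise spatial gradients of the deformation
    # vector field (dvf has shape H x W x 2, one 2-D displacement per pixel),
    # discouraging physically implausible, jagged deformations.
    dy = np.diff(dvf, axis=0)
    dx = np.diff(dvf, axis=1)
    return float((dy ** 2).mean() + (dx ** 2).mean())
```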
With regard to claim 5, the claim is drawn to the method of claim 2, wherein the deformation vector field network is an encoding-decoding neural network having both linear and non-linear convolution layers (see Gupta et al., i.e. in Section 3.2.2, disclosure of DVFNet).
With regard to claim 6, the claim is drawn to the method of claim 1, wherein the machine learning further includes augmentation performed on the training data (see Gupta et al., i.e. in Section 4.2, which discloses the teachings of data augmentation).
With regard to claim 7, the claim is drawn to the method of claim 6, wherein the data augmentation includes two or more of the following applied stochastically to images within the training data: horizontal flip, vertical flip, shift scale rotate, sharpen, Gaussian blur, random brightness contrast, equalize, and contrast limited adaptive histogram equalization (CLAHE) (see Gupta et al., i.e. in Section 4.2, data augmentation, which further discloses the effect of different augmentation strategies on segmentation accuracy, i.e. 8 training experiments performed for both in vitro and in vivo datasets).
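A stochastic augmentation pipeline of the kind recited in claim 7 can be sketched as follows. This is a minimal illustration using a few of the listed transforms (horizontal/vertical flip, random brightness/contrast); the probabilities and parameter ranges are assumed, not taken from Gupta et al. Note that in segmentation training, any geometric transform applied to an image must also be applied to its mask:

```python
import numpy as np

def augment(img, rng):
    # Each transform fires stochastically, one independent draw per
    # transform; img is a float image with values in [0, 1].
    if rng.random() < 0.5:
        img = img[:, ::-1]                    # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :]                    # vertical flip
    if rng.random() < 0.5:                    # random brightness/contrast
        alpha = rng.uniform(0.8, 1.2)         # contrast gain (assumed range)
        beta = rng.uniform(-0.1, 0.1)         # brightness shift (assumed range)
        img = np.clip(alpha * img + beta, 0.0, 1.0)
    return img
```

Libraries such as Albumentations provide these transforms (including sharpen, Gaussian blur, equalize, and CLAHE) with paired image/mask handling built in.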
With regard to claim 8, the claim is drawn to the method of claim 7, wherein the data augmentation includes random brightness contrast and at least one of equalize and CLAHE applied stochastically to images within the training data (see Gupta et al., i.e. in Section 4.2, which discloses RBC (Random Brightness Contrast), CLAHE (Contrast Limited Adaptive Histogram Equalization), etc.).
With regard to claim 9, the claim is drawn to the method of claim 1, wherein the one or more surgical instruments comprise a laser fiber (see Gupta et al., i.e. in Section 4.5, which discloses that “Fig. 5 shows that our model outperforms the existing approaches by overcoming the challenges and providing a more accurate delineation of stone and laser fiber…”).
With regard to claim 10, the claim is drawn to the method of claim 1, wherein the endoscopic probe is deployed during a lithotripsy procedure (see Gupta et al., i.e. in Section 3.1, para. 1, etc.).
With regard to claim 12, the claim is drawn to the method of claim 1, further comprising: while the endoscopic probe is still deployed, receiving additional imaging data; generating an updated classification of the visual field based on the additional imaging data; and further modifying the display of the imaging data based on the updated classification (see Gupta et al., i.e. in Section 4.6, extended out-of-sample assessment, and also see fig. 6, which discloses “the qualitative analysis of our proposed method (HybResUNet+DVFNet (with warped image)) for in vivo against existing SOTA methods on our unseen in vivo test dataset”).
With regard to claim 13, the claim is drawn to the method of claim 1, wherein modifying the display of the imaging data comprises adding one or more properties of the one or more renal calculi to the display (see Gupta et al., in fig. 5-7, last column, wherein, e.g. the shape of the renal calculi is shown in the segmentation mask).
With regard to claim 14, the claim is drawn to a computing system, comprising: a processor; a display; and memory comprising instructions, which when executed by the processor cause the computing system to: receive, from an endoscopic probe while it is deployed, imaging data of a visual field including one or more renal calculi and one or more surgical instruments; analyze the received imaging data using a segmentation neural network; generate, from the network analysis, a classification of the visual field into spatial regions, wherein a first spatial region is classified as one or more renal calculi and a second, distinct spatial region is classified as one or more surgical instruments; and based on the classification of the visual field, modify a display of the imaging data provided on the display during deployment of the endoscopic probe; wherein the segmentation neural network analyzes the imaging data based on machine learning of training data performed using a loss function with both region-based and contour-based loss components (instant claim is similarly rejected for at least the rationales set forth in the discussion of claim 1 above, incorporated by reference herein; in addition, Gupta et al., i.e. in Section 4.7, further discloses a computer system).
With regard to claim 15, the claim is drawn to the computing system of claim 14, wherein the segmentation neural network machine learning is performed using a deformation vector field network (see Gupta et al., i.e. in Section 3.2.2, the teachings of DVFNet, and also see the details of fig. 4).
With regard to claim 16, the claim is drawn to the computing system of claim 14, wherein the loss function used in the machine learning of the segmentation neural network further includes a cross-correlation loss component from one or more warped images generated by the deformation vector field network (see Gupta et al., i.e. in Section 3.2.3, cross-correlation loss, Lsim).
With regard to claim 17, the claim is drawn to the computing system of claim 14, wherein the loss function used in the machine learning of the segmentation neural network further includes a smoothing component from one or more deformation vector field maps generated by the deformation vector field network (see Gupta et al., i.e. in Section 3.2.3, smoothness constraint, Lsmo).
With regard to claim 18, the claim is drawn to the computing system of claim 14, wherein the deformation vector field network is an encoding-decoding neural network having both linear and non-linear convolution layers (see Gupta et al., i.e. in Section 3.2.2, disclosure of DVFNet).
With regard to claim 19, the claim is drawn to a computer readable storage medium comprising instructions, which when executed by a processor of a computing device cause the processor to: receive, from an endoscopic probe while it is deployed, imaging data of a visual field including one or more renal calculi and one or more surgical instruments; analyze the received imaging data using a segmentation neural network; generate, from the network analysis, a classification of the visual field into spatial regions, wherein a first spatial region is classified as one or more renal calculi and a second, distinct spatial region is classified as one or more surgical instruments; and based on the classification of the visual field, modify a display of the imaging data provided during deployment of the endoscopic probe; wherein the segmentation neural network analyzes the imaging data based on machine learning of training data performed using a loss function with both region-based and contour-based loss components (instant claim is similarly rejected for at least the rationales set forth in the discussions of claims 1 and 14 above, incorporated by reference herein; in addition, Gupta et al., i.e. in Section 4.7, further discloses a computer system, which inherently requires some form of memory and/or computer readable storage medium to carry out the instructions).
With regard to claim 20, the claim is drawn to the computer readable storage medium of claim 19, wherein the segmentation neural network machine learning is performed using a deformation vector field network (see Gupta et al., i.e. in Section 3.2.2, the teachings of DVFNet, and also see the details of fig. 4).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Gupta et al. as applied to claim 1 above, and further in view of Cherubini et al. (U.S. Pub. No. 2023/0050833 A1).
With regard to claim 11, the claim is drawn to the method of claim 10, wherein the display of the modified imaging data occurs during the lithotripsy procedure and is provided to a medical practitioner to assist in the ongoing lithotripsy procedure.
Gupta et al. merely lacks an explicit disclosure of the aspect of “wherein the display of the modified imaging data occurs during the lithotripsy procedure and is provided to a medical practitioner to assist in the ongoing lithotripsy procedure”.
However, Cherubini et al. disclose an analogous invention related to systems and methods for contextual image analysis (see Cherubini, abstract, etc.). More specifically, para. 73 discloses that “[0073] As depicted in FIG. 1, overlay device 105 may augment the video received from image device 103 and then transmit the augmented video to a display device 107. In some embodiments, the augmentation may comprise providing one or more overlays for the video, as described herein. As further depicted in FIG. 1, overlay device 105 may be configured to relay the video from image device 103 directly to display device 107. For example, overlay device 105 may perform a direct relay under predetermined conditions, such as when there is no augmentation or overlay to be generated. Additionally, or alternatively, overlay device 105 may perform a direct relay if operator 101 inputs a command to overlay device 105 to do so. The command may be received via one or more buttons included on overlay device 105 and/or through an input device such as a keyboard or the like. In cases where there is video modification or one or more overlay(s), overlay device 105 may create a modified video stream to send to display device. The modified video may comprise the original image frames with the overlay and/or classification information to displayed to the operator via display device 107. Display device 107 may comprise any suitable display or similar hardware for displaying the video or modified video. Other types of video modifications (e.g., a zoomed image of the at least one object, a modified image color distribution, etc.) are described herein”.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gupta et al. by the teachings of Cherubini et al., and to incorporate the limitation(s) discussed and also taught by Cherubini et al., as the cited prior arts are at least analogous arts, if not also in the same field of endeavor, relating to the image processing arts. The motivation to combine is that, “…as described herein, embodiments of the present disclosure provide such detections and classification information efficiently and when needed, thereby preventing the display from becoming overcrowded with unnecessary information” (see Cherubini et al., i.e. para. 19-20, etc.).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Sousa Ferreira et al. (U.S. Pub. No. 2024/0354949 A1) disclose an invention relating to lesion detection and classification in medical image data, and more particularly to the automated identification of pancreatic cystic lesions in images/videos acquired during endoscopic ultrasonography (also known as endoscopic ultrasound imagery), to assess lesion seriousness and guide subsequent medical treatment.
The Art Unit (or Workgroup) location of your application in the USPTO has changed. To aid in correlating any papers for this application, all further correspondence regarding this application should be directed to Art Unit 2681.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacky X. Zheng whose telephone number is (571) 270-1122. The examiner can normally be reached on Monday - Friday, 9:00 am - 5:00 pm, alt. Friday Off.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Akwasi Sarpong can be reached on (571) 272-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACKY X ZHENG/Primary Examiner, Art Unit 2681
1 See In re Zletz, 893 F.2d 319 (Fed. Cir. 1989).
2 In reference to the memorandum issued on January 26, 2010, titled "Subject Matter Eligibility of Computer Readable Media", also published in Official Gazette of the USPTO on February 23, 2010 (http://www.uspto.gov/web/offices/com/sol/og/2010/week08/TOC.htm#ref20), and also see the provided suggestion for amendment discussed therein.
3 See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter), and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 U.S.C. § 101, Aug. 24, 2009, p. 2.
4 See Diamond v. Chakrabarty, 447 U.S. 303, 308, 206 USPQ 193, 196-97 (1980).