Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is responsive to the amendment as filed on 10/6/2025.
This action is made Final.
Claims 1-20 are pending in the case. Claims 1, 11, and 20 are independent claims. Claims 1, 3, 5, 7, 11, 13, 15, 17 and 20 have been amended.
Response to Arguments
Applicant's arguments filed 10/6/2025 have been fully considered but they are not persuasive. Applicant remarks, regarding the rejection of claims 1-8, 11-18 and 20, that the amendments made to claims 1, 11 and 20 overcome the pending 101 rejections of the claims (pages 8-10). The Examiner disagrees. More specifically, the additional feature of “a slide analysis tool” merely adds another generic component to the ineligible subject matter. Applicant is directed to the maintained 101 rejection wherein the Examiner further articulates how the amended claims remain rejected. Applicant further remarks that Reicher does not disclose every feature of newly amended claims 1, 11 and 20 (pages 10-11). The Examiner disagrees. Applicant is directed to the updated rejection of the claims necessitated by the amendment.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8, 11-18 and 20 remain rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1, 11 and 20 recite at least “receiving, via a slide analysis tool, a plurality of medical images of at least one pathology specimen, the pathology specimen being associated with a patient; receiving, via the slide analysis tool, a gross description, the gross description including data about the medical images, the data including information about a size, a shape, or an appearance of the at least one pathology specimen based on an examination of the plurality of medical images; extracting, via the slide analysis tool, data from the gross description; determining, using a machine learning system of the slide analysis tool, at least one associated location on the medical images for one or more pieces of data extracted; and outputting, via a display, a visual indication of the gross description data displayed in relation to the medical images”. These limitations are construed as abstract ideas for being performable in the human mind and/or on paper. A human can certainly receive medical images, receive gross descriptions including data about the medical images, extract data from the gross descriptions, determine an associated location on the medical images for each piece of extracted data, and output a visual indication of the gross description data related to the medical images, especially if the medical images and gross description are on paper.
This judicial exception is not integrated into a practical application because the additional limitations of “slide analysis tool”, “machine learning system”, “memory”, “processor” and “a display” (from claims 1 and 11; no additional limitations in claim 20) are merely generic computing components on which the instructions to implement the abstract idea are applied. Additional limitations directed toward mere instructions to apply the exception to generic computing components, alone or in combination, do not integrate the judicial exception into a practical application (See MPEP§2106.05(f)).
As to the use of ML technology in the data processing limitations, said steps are nothing more than an attempt to recycle preexisting artificial intelligence or machine-learning (AI/ML) technologies and apply them to the claimed processing of gross description data and medical images. There are no improvements in said ML techniques, such as advances in the field of computer science itself, or designing a new neural network, and there is no controlling of a technological process using the outcome of said AI/ML operations.
Further, looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually; there is no indication that the combination of elements improves the functioning of a computer or improves any other technology, including AI/ML technology; their collective functions merely provide conventional computer implementation. None of the additional elements "offers a meaningful limitation beyond generally linking 'the use of the [method] to a particular technological environment,' that is, implementation via computers." Alice Corp., slip op. at 16 (citing Bilski v. Kappos, 561 U.S. 610, 611 (2010)).
The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements identified above, being directed toward mere instructions to apply the exception to generic computing components, alone or in combination, are well-understood, routine, and conventional, do not provide an inventive concept, and thus, do not amount to significantly more than the judicial exception. Therefore, independent claims 1, 11 and 20 are directed toward ineligible subject matter.
Dependent claims 2-8 and 12-18 recite additional limitations that are also construed as additional abstract ideas, mere instructions to apply the judicial exception to generic computing components, or insignificant extra-solution activity, and are, therefore, also directed toward ineligible subject matter.
The analysis of dependent claims 9, 10 and 19 has resulted in the determination that these claims recite eligible subject matter.
Claim Rejections - 35 USC § 112
The rejections of claims 3, 4, 13 and 14 have been withdrawn in view of the amendment.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-9 and 11-20 remain rejected under 35 U.S.C. 102(a)(1) as being anticipated by Reicher (USPUB 20160364862 A1).
Claim 1:
Reicher discloses A computer-implemented method for processing electronic medical images, comprising: receiving, via a slide analysis tool, a plurality of medical images of at least one pathology specimen, the pathology specimen being associated with a patient; receiving, via the slide analysis tool, a gross description, the gross description including data about the medical images, the data including information about a size, a shape, or an appearance of the at least one pathology specimen based on an examination of the plurality of medical images; extracting, via the slide analysis tool, data from the gross description; determining, using a machine learning system of the slide analysis tool, at least one associated location on the medical images for one or more pieces of data extracted; and outputting, via a display, a visual indication of the gross description data displayed in relation to the medical images (Fig 17, 0081, 0149-150: by simultaneously weighing many factors, the learning engine 110 may automatically mark a lesion on an image and generate a corresponding report or notification that indicates influences on the marked lesions, such as “Lesion has decreased in size since date X; likely reasons include surgical debulking on date Y, and chemotherapy on date Z.”...The model may extract pathology location information from one or more pathology reports and map each extracted pathology location to an associated image and, optionally, a location within the associated image. Similarly, in some embodiments, the learning engine 110 locates the electronic pathology result by deducting semantics from text included in a pathology report or searching for a field within a structured pathology report... The learning engine 110 also generates an electronic correlation between the biopsy location and the electronic pathology result (at block 1508). 
For example, in some embodiments, the learning engine 110 generates the electronic correlation by generating a table or other data structure that maps the biopsy location to the electronic pathology result. As illustrated in FIG. 15, the learning engine 110 also displays the image with the biopsy location marked within the image (at block 1510). For example, FIG. 16 illustrates an image 1600 with two marked biopsy locations 1602. As illustrated in FIG. 16, the image 1600 provides a user, such as a radiologist with an image (e.g., a three-dimensional image) of an organ with associated biopsied locations. In some embodiments, when a user selects the marked biopsy location 1602, the learning engine 110 automatically displays the corresponding electronic pathology result based on the electronic correlation. In some embodiments, the learning engine 110 displays the electronic pathology result within the associated pathology report (see, e.g., the example pathology report 1700 illustrated in FIG. 17). Also, in some embodiments, the learning engine 110 displays other pathology reports with the associated pathology report, such as past pathology reports, future pathology reports, or a combination thereof, to provide a user with a timeline of pathology results... In some embodiments, a user may enable or disable the marked biopsy locations 1602, such as using configurable preferences associated with the user. Also, in some embodiments, a user can specify preferences for characteristics of the marked biopsy locations 1602 (e.g., size, shape, color, orientation, etc.).).
Claims 2 and 12:
Reicher discloses determining if the gross description is structured or unstructured; upon determining that the gross description is structured, providing the gross description to a rule-based AI system; and upon determining the gross description is unstructured, providing the gross description to a natural language processing based machine learning system (0056-59).
Claims 3 and 13:
Reicher discloses receiving a corresponding radiologic image associated with the patient; and determining a sample location of the medical images relative to the radiologic image (0056 and 0060).
Claims 4 and 14:
Reicher discloses displaying the sample location of the medical image relative to the radiologic image (0056 and 0060).
Claims 5 and 15:
Reicher discloses receiving a corresponding three-dimensional figure associated with the patient; and determining a sample location of the medical images relative to the three-dimensional figure (0148-150).
Claims 6 and 16:
Reicher discloses comparing the associated location of the data on the medical images with an external system, wherein any discrepancies are marked (0121).
Claims 7 and 17:
Reicher discloses determining that diseased tissue is present in two or more of the plurality of medical images; and determining a location of the diseased tissue in three-dimensions based on the determined location of diseased tissue within the medical images (0093-95).
Claims 8 and 18:
Reicher discloses estimating an area and/or volume of the diseased tissue (0071).
Claims 9 and 19:
Reicher discloses determining a new coordinate system for measurement data of lesions within the medical images (0050 and 0060).
Claim 11:
Reicher discloses A system for processing electronic medical images, the system comprising: at least one memory storing instructions; and at least one processor configured to execute the instructions to perform operations comprising: receiving, via a slide analysis tool, a plurality of medical images of at least one pathology specimen, the pathology specimen being associated with a patient; receiving, via the slide analysis tool, a gross description, the gross description including data about the medical images, the data including information about a size, a shape, or an appearance of the at least one pathology specimen based on an examination of the plurality of medical images; extracting, via the slide analysis tool, data from the gross description; determining, using a machine learning system of the slide analysis tool, at least one associated location on the medical images for one or more pieces of data extracted; and outputting, via a display, a visual indication of the gross description data displayed in relation to the medical images (Fig 17, 0081, 0149-150: by simultaneously weighing many factors, the learning engine 110 may automatically mark a lesion on an image and generate a corresponding report or notification that indicates influences on the marked lesions, such as “Lesion has decreased in size since date X; likely reasons include surgical debulking on date Y, and chemotherapy on date Z.”...The model may extract pathology location information from one or more pathology reports and map each extracted pathology location to an associated image and, optionally, a location within the associated image. Similarly, in some embodiments, the learning engine 110 locates the electronic pathology result by deducting semantics from text included in a pathology report or searching for a field within a structured pathology report... 
The learning engine 110 also generates an electronic correlation between the biopsy location and the electronic pathology result (at block 1508). For example, in some embodiments, the learning engine 110 generates the electronic correlation by generating a table or other data structure that maps the biopsy location to the electronic pathology result. As illustrated in FIG. 15, the learning engine 110 also displays the image with the biopsy location marked within the image (at block 1510). For example, FIG. 16 illustrates an image 1600 with two marked biopsy locations 1602. As illustrated in FIG. 16, the image 1600 provides a user, such as a radiologist with an image (e.g., a three-dimensional image) of an organ with associated biopsied locations. In some embodiments, when a user selects the marked biopsy location 1602, the learning engine 110 automatically displays the corresponding electronic pathology result based on the electronic correlation. In some embodiments, the learning engine 110 displays the electronic pathology result within the associated pathology report (see, e.g., the example pathology report 1700 illustrated in FIG. 17). Also, in some embodiments, the learning engine 110 displays other pathology reports with the associated pathology report, such as past pathology reports, future pathology reports, or a combination thereof, to provide a user with a timeline of pathology results... In some embodiments, a user may enable or disable the marked biopsy locations 1602, such as using configurable preferences associated with the user. Also, in some embodiments, a user can specify preferences for characteristics of the marked biopsy locations 1602 (e.g., size, shape, color, orientation, etc.).).
Claim 20:
Reicher discloses A non-transitory computer-readable medium storing instructions that, when executed by a processor, perform operations processing electronic medical images, the operations comprising: receiving, via a slide analysis tool, a plurality of medical images of at least one pathology specimen, the pathology specimen being associated with a patient; receiving, via the slide analysis tool, a gross description, the gross description including data about the medical images, the data including information about a size, a shape, or an appearance of the at least one pathology specimen based on an examination of the plurality of medical images; extracting, via the slide analysis tool, data from the gross description; determining, using a machine learning system of the slide analysis tool, at least one associated location on the medical images for one or more pieces of data extracted; and outputting, via a display, a visual indication of the gross description data displayed in relation to the medical images (Fig 17, 0081, 0149-150: by simultaneously weighing many factors, the learning engine 110 may automatically mark a lesion on an image and generate a corresponding report or notification that indicates influences on the marked lesions, such as “Lesion has decreased in size since date X; likely reasons include surgical debulking on date Y, and chemotherapy on date Z.”...The model may extract pathology location information from one or more pathology reports and map each extracted pathology location to an associated image and, optionally, a location within the associated image. Similarly, in some embodiments, the learning engine 110 locates the electronic pathology result by deducting semantics from text included in a pathology report or searching for a field within a structured pathology report... The learning engine 110 also generates an electronic correlation between the biopsy location and the electronic pathology result (at block 1508). 
For example, in some embodiments, the learning engine 110 generates the electronic correlation by generating a table or other data structure that maps the biopsy location to the electronic pathology result. As illustrated in FIG. 15, the learning engine 110 also displays the image with the biopsy location marked within the image (at block 1510). For example, FIG. 16 illustrates an image 1600 with two marked biopsy locations 1602. As illustrated in FIG. 16, the image 1600 provides a user, such as a radiologist with an image (e.g., a three-dimensional image) of an organ with associated biopsied locations. In some embodiments, when a user selects the marked biopsy location 1602, the learning engine 110 automatically displays the corresponding electronic pathology result based on the electronic correlation. In some embodiments, the learning engine 110 displays the electronic pathology result within the associated pathology report (see, e.g., the example pathology report 1700 illustrated in FIG. 17). Also, in some embodiments, the learning engine 110 displays other pathology reports with the associated pathology report, such as past pathology reports, future pathology reports, or a combination thereof, to provide a user with a timeline of pathology results... In some embodiments, a user may enable or disable the marked biopsy locations 1602, such as using configurable preferences associated with the user. Also, in some embodiments, a user can specify preferences for characteristics of the marked biopsy locations 1602 (e.g., size, shape, color, orientation, etc.).).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 10 remains rejected under 35 U.S.C. 103 as being unpatentable over Reicher in view of Timms (USPUB 20090029362 A1).
Claim 10:
Reicher teaches every feature of claim 1.
Reicher, by itself, does not appear to teach inferring genomic characteristics about a tumor based on data describing one or more alternative tumors within the patient.
The Examiner maintains that these features were previously well-known as taught by Timms.
Timms teaches inferring genomic characteristics about a tumor based on data describing one or more alternative tumors within the patient (0056: The method of this embodiment involves (1) obtaining, from an individual, genomic DNA from a first and second separate tumor; and (2) determining the copy number profile for genomic DNA from the first and second tumors. If the copy number profile of the genomic DNA from the first tumor and the copy number profile of the genomic DNA from the second tumor are substantially similar, this indicates that the first and second tumors are likely to be a primary and metastasis. More specifically, if the second tumor sample has additional copy number changes as compared to the first tumor, it is likely that the first tumor is the primary and the second tumor is the metastasis).
Reicher and Timms are analogous art because they are from the same problem-solving area, aiding pathologists in the analysis of tissue samples.
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Reicher and Timms before him or her, to combine the teachings of Reicher and Timms. The rationale for doing so would have been to obtain the benefit of determining whether two or more given tumors are independent or related.
Therefore, it would have been obvious to combine Reicher and Timms to obtain the invention as specified in the instant claim(s).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED H ZUBERI whose telephone number is (571)270-7761. The examiner can normally be reached Mon – Th 10AM-8PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong can be reached on (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED H ZUBERI/Primary Examiner, Art Unit 2178