DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 8, 11-13, 18, and 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Armitage (CA 3144315 A1 cited with WO 2020260901) in view of Hariharan (EP 3477555 A1).
Regarding claims 1, 11, and 21, Armitage teaches a method comprising:
training a first artificial intelligence (AI) model based on first training data comprising images labeled with imaging biomarkers, the first AI model trained to identify a plurality of imaging biomarker features in at least one image (see Fig. 2a-2c, paragraph 146, in step 256, the labelled sensor dataset(s) are stored as the second labelled sensor dataset for use in training one or more ML techniques to generate one or more ML model(s) configured to estimate one or more corresponding clinical biomarker(s) of interest associated with the second labelled training dataset. The ML model(s) may then receive as input the output of the first set of ML model(s) including the extracted and classified segments of sensor data, each of which have been classified based on one or more clinical biomarker component(s));
training a second AI model based on second training data comprising sets of imaging biomarker features associated with task-specific labels, the second AI model trained to identify at least one feature based at least partially on a set of imaging biomarker features (see Fig. 2d, paragraph 149, in step 264, the first labelled sensor dataset is input to one or more of the second set of ML technique(s) for training one or more ML model(s) of the second set of ML model(s). The second set of ML model(s) are configured for estimating at least one of: one or more clinical biomarker(s); one or more biomarkers (or intermediate biomarkers); and one or more clinical biomarker component(s). The output of each ML model of the second set of ML models may include an estimate of: one or more clinical biomarker(s); one or more biomarker(s); one or more clinical biomarker component(s) and the like. The output estimates of one or more clinical biomarker(s); one or more biomarker(s); one or more clinical biomarker component(s) and the like may be compared with the corresponding second labelled sensor training dataset associated with the first labelled sensor training dataset. That is, for example, the output estimates of one or more clinical biomarkers based on the segments and labels of the first labelled sensor training data set are compared with the corresponding clinical biomarker labels of those segments);
processing at least one input image with the first AI model to generate a first AI model output (see Fig. 2d, paragraph 148, the first labelled sensor dataset is used as input to one or more ML technique(s) for training one or more ML models of the second set of ML models. The second labelled sensor dataset is used for training the ML techniques to output the correct or an estimate of the clinical biomarker associated with one or more segments of the sensor data of the first labelled sensor dataset); and
processing the first AI model output with the second AI model to generate a second AI model output (see Fig. 2d, paragraph 154, step 264 or 269 may further include using the output of one or more of the ML model(s) of the second set of ML models as an input to one or more other ML model(s) of the second set of ML models for estimating one or more clinical biomarker(s) of interest, in which the input to the one or more other ML model(s) may be based on at least one of: one or more of the estimated clinical biomarkers output by the one or more ML model(s); one or more of the biomarkers output by the one or more ML model(s); one or more of the clinical biomarker components output by the one or more ML model(s)).
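For context, the two-stage arrangement mapped above (a first model that turns an image into biomarker features, and a second model that turns those features into a task-level output) can be sketched in code. This is a minimal, purely hypothetical illustration of the data flow only; the class names, the toy mean/max "features," and the nearest-centroid "training" are not taken from Armitage or the claims.

```python
# Illustrative two-stage pipeline: first model (image -> biomarker
# features), second model (biomarker features -> task-specific label).
# All names and the toy logic are hypothetical.
from statistics import mean

class BiomarkerModel:
    """First model: maps an image to a biomarker feature set (toy)."""
    def fit(self, images, biomarker_labels):
        self.labels = biomarker_labels  # retained only for illustration
        return self

    def predict(self, image):
        # Flatten the 2-D "image" and compute trivial stand-in features.
        flat = [p for row in image for p in row]
        return {"mean_intensity": mean(flat), "max_intensity": max(flat)}

class TaskModel:
    """Second model: maps biomarker features to a task label (nearest centroid)."""
    def fit(self, feature_sets, task_labels):
        by_label = {}
        for feats, lab in zip(feature_sets, task_labels):
            by_label.setdefault(lab, []).append(feats["mean_intensity"])
        self.centroids = {lab: mean(v) for lab, v in by_label.items()}
        return self

    def predict(self, feats):
        x = feats["mean_intensity"]
        return min(self.centroids, key=lambda lab: abs(self.centroids[lab] - x))

# Stage 1: train on images labeled with imaging biomarkers.
first = BiomarkerModel().fit(images=[[[0, 1]], [[8, 9]]],
                             biomarker_labels=["a_line", "b_line"])
# Stage 2: train on biomarker feature sets paired with task-specific labels.
second = TaskModel().fit(
    feature_sets=[{"mean_intensity": 0.5}, {"mean_intensity": 8.5}],
    task_labels=["normal", "abnormal"])
# Inference: chain the two models, as in the claimed method steps.
features = first.predict([[7, 9]])   # first-model output
task_out = second.predict(features)  # second-model output
```

The point of the sketch is only the chaining: the first model's output is the second model's input, which is the structure the rejection reads onto Armitage's first and second sets of ML models.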
However, Armitage does not expressly teach a task-specific feature.
Hariharan teaches that each weight is associated with a respective feature and task combination. One or more task-specific features are identified for a given task based on the weights. A model is generated based on the one or more task-specific features, wherein the one or more task-specific features is a subset of a larger feature set for which the trained neural network was trained (see paragraph 5).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Armitage with Hariharan's teaching that a model is generated based on the one or more task-specific features, wherein the one or more task-specific features is a subset of a larger feature set for which the trained neural network was trained, as the claimed task-specific feature. Therefore, combining the elements of the prior art according to known methods and techniques, such as generating a model based on the one or more task-specific features, would yield predictable results.
Regarding claims 2 and 12, the combination teaches the method of claim 1,
wherein each imaging biomarker feature of the plurality of imaging biomarker features comprises at least one value corresponding to a specific aspect of the at least one image or video including the at least one image (see Hariharan, paragraph 26, from the weights, features can be broadly classified based on the roles played by them in teaching the multi-task network into: common features (e.g., high WS values, such as WS weights of 1), task-specific features (e.g., high Wi,j, ... values, such as weights of 1), and/or redundant features (i.e., noise)).
Regarding claims 3 and 13, the combination teaches the method of claim 1,
wherein the first AI model output comprises a set of imaging biomarker features (see Armitage, paragraph 56, inputting sensor data to a first set of ML technique(s) for training a first set of ML model(s) for outputting an indication of extracted sensor data segments and a classification of each segment in relation to one or more clinical biomarker component(s); updating the first set of ML technique(s) based on comparing the output indication of extracted sensor data segments and clinical biomarker component classification with the corresponding labelled sensor data segments), and
wherein the second AI model output comprises at least one task-specific feature (see Armitage, paragraph 66, a clinical biomarker estimation unit for estimating one or more clinical biomarker(s) of the subject using a second set of ML model(s) configured to estimate the one or more clinical biomarker(s) of the subject based on the extracted portions of sensor data).
Regarding claims 8 and 18, the combination teaches the method of claim 1, wherein the second AI model is configured to identify the at least one task-specific feature based at least partially on at least one of the following:
the at least one input image, the at least one image, a video including the at least one image, an output of an image or video processing algorithm based on the at least one image or a video including the at least one image, or any combination thereof (see Armitage, Fig. 2a and 2b, paragraph 83, for generating labelled training dataset for use in the first processing stage of the data processing pipeline).
Claim(s) 4-5 and 14-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Armitage (CA 3144315 A1 cited with WO 2020260901) in view of Hariharan (EP 3477555 A1), and further in view of Yip (US-PAT-NO: 10957041 B2).
Regarding claims 4 and 14, the combination does not expressly teach the method of claim 1, wherein the task-specific labels comprise severity metrics.
Yip teaches that the deep learning frameworks include a multiscale configuration that uses a tiling strategy to accurately capture structural and local histology of various diseases (e.g., cancer tumor prediction). These multiscale configurations perform classification on (labeled or unlabeled) histopathology images using classifiers trained to classify tiles of received histopathology images (see Col. 11, lines 10-16).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination by Yip for providing deep learning frameworks that include a multiscale configuration using a tiling strategy to accurately capture the structural and local histology of various diseases (e.g., cancer tumor prediction), as the claimed task-specific labels comprising severity metrics. Therefore, the teaching, suggestion, or motivation in the prior art would have led one of ordinary skill to modify the prior art references or to combine their teachings to arrive at the claimed invention.
Regarding claims 5 and 15, the combination teaches the method of claim 1,
wherein the first AI model is configured to process images to identify the plurality of imaging biomarker features (see Armitage, Fig. 2b, paragraph 138, one or more ML model(s) based on one or more ML technique(s) may be trained by inputting the retrieved raw sensor data to a first set of ML technique(s). The first set of ML technique(s) configured for training the first set of ML model(s) that output an indication of extracted sensor data segments and a classification of each segment in relation to one or more clinical biomarker component(s) associated with the one or more clinical biomarkers of interest).
However, the combination does not expressly teach processing a sequence of images.
Yip teaches that biomarker detection may be enhanced by combining imaging metrics with structured clinical and sequencing data to develop enhanced biomarkers (see Col. 12, lines 17-19).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination by Yip for providing that biomarker detection may be enhanced by combining imaging metrics with structured clinical and sequencing data to develop enhanced biomarkers, as processing a sequence of images. Therefore, combining the elements of the prior art according to known methods and techniques, such as using sequencing data to develop enhanced biomarkers, would yield predictable results.
Claim(s) 6-7 and 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Armitage (CA 3144315 A1 cited with WO 2020260901) in view of Hariharan (EP 3477555 A1), and further in view of Locke (US-PAT-NO: 11551357 B2).
Regarding claims 6 and 16, the combination does not expressly teach the method of claim 1, further comprising:
generating the first training data based on annotations selected from a plurality of options presented on at least one graphical user interface.
Locke teaches that the annotation log 801 may serve as a centralized view for consultation and informal sharing to be reviewed within the context of an annotation or area of interest. A user (e.g., pathologist) requesting a consultation and the consulting pathologist may read, make comments, and/or have an ongoing dialogue specific to each annotation, each with a time stamp. Users may also select the thumbnails within the annotation log 801 to view that area of interest in large scale within a main viewer window (see Fig. 8, Col. 16, lines 57-65).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination by Locke for providing that users may select thumbnails within the annotation log 801 to view an area of interest at large scale within a main viewer window, as generating the first training data based on annotations selected from a plurality of options presented on at least one graphical user interface. Therefore, combining the elements of the prior art according to known methods and techniques, such as selecting thumbnails within the annotation log to view an area of interest, would yield predictable results.
Regarding claims 7 and 17, the combination teaches the method of claim 1, further comprising:
generating the second training data based on input associating the task-specific labels with imaging-biomarker inputs (see Fig. 2d, paragraph 149, in step 264, the first labelled sensor dataset is input to one or more of the second set of ML technique(s) for training one or more ML model(s) of the second set of ML model(s). The second set of ML model(s) are configured for estimating at least one of: one or more clinical biomarker(s); one or more biomarkers (or intermediate biomarkers); and one or more clinical biomarker component(s). The output of each ML model of the second set of ML models may include an estimate of: one or more clinical biomarker(s); one or more biomarker(s); one or more clinical biomarker component(s) and the like).
However, the combination does not expressly teach user input associating the task-specific labels.
Locke teaches that the navigation menu 301 may also include a view modes input for adjusting the view (e.g., splitting the screens to view multiple slides and/or stains at once). A zoom menu 302 may be used for quickly adjusting the zoom level of the target image 303. The target image may be displayed in any color. For example, a dark, neutral color scheme may be used to display the details of the target image 303. The viewing application tool 101 may include a positive/negative indicator for whether possible disease is present at an (x, y) coordinate in the image. Additionally, a confirmation and/or edit of the positive/negative indicator may be performed by a user (see col. 14, lines 41-52).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination by Locke for providing that the viewing application tool 101 may include a positive/negative indicator for whether possible disease is present at an (x, y) coordinate in the image, and that a confirmation and/or edit of the positive/negative indicator may be performed by a user, as user input associating the task-specific labels. Therefore, combining the elements of the prior art according to known methods and techniques, such as user confirmation and/or editing of the positive/negative indicator, would yield predictable results.
Claim(s) 9-10 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Armitage (CA 3144315 A1 cited with WO 2020260901) in view of Hariharan (EP 3477555 A1), and further in view of Halmann (PGPUB: 20220061813 A1).
Regarding claims 9 and 19, the combination does not expressly teach the method of claim 1, wherein the plurality of imaging biomarker features comprises at least one of the following features of at least one ultrasound image:
A-line, B-line, pleural line irregularity, or any combination thereof.
Halmann teaches that the ultrasound images may comprise the pixel parameter values (e.g., brightness values) calculated at 306, and an annotated version of each ultrasound image that comprises the pixel parameter values overlaid with visual indications (e.g., annotations) regarding B-lines, the pleural position, and/or pleural irregularities may be output to the display in real-time. In some examples, the display is included in the ultrasound imaging system, such as display device 118. For example, B-lines may be highlighted with a solid vertical line, and the upper and/or lower border (e.g., boundary) of the pleural line may be indicated with markers or traced (e.g., with a line) (see Fig. 3, paragraph 53).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination by Halmann for providing that the ultrasound images may comprise the pixel parameter values (e.g., brightness values) calculated at 306, and that an annotated version of each ultrasound image comprising the pixel parameter values overlaid with visual indications (e.g., annotations) regarding B-lines, the pleural position, and/or pleural irregularities may be output to the display in real-time, as the claimed plurality of imaging biomarker features comprising at least one of the following features of at least one ultrasound image: A-line, B-line, pleural line irregularity, or any combination thereof. Therefore, combining the elements of the prior art according to known methods and techniques, such as visual indications (e.g., annotations) regarding B-lines, the pleural position, and/or pleural irregularities, would yield predictable results.
Regarding claims 10 and 20, the combination does not expressly teach the method of claim 1, wherein the plurality of imaging biomarkers is predefined.
Halmann teaches that visually indicating the irregular pleura in each of the plurality of lung ultrasound images on the display in real-time during the acquiring comprises: identifying borders of a pleural line in each of the plurality of lung ultrasound images based on at least pixel brightness; determining an irregularity score for each location of the pleural line; and visually distinguishing locations of the pleural line having irregularity scores less than a threshold from locations of the pleural line having irregularity scores greater than or equal to the threshold. In a second example of the method, which optionally includes the first example, the irregular pleura comprise the locations of the pleural line having the irregularity scores greater than or equal to the threshold, and determining the irregularity score for each location of the pleural line (see paragraph 76).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination by Halmann for providing the identification of the borders of a pleural line in each of the plurality of lung ultrasound images based on at least pixel brightness, and the determination of an irregularity score for each location of the pleural line, as the claimed predefined plurality of imaging biomarkers. Therefore, combining the elements of the prior art according to known methods and techniques, such as identifying the borders of a pleural line in each of the plurality of lung ultrasound images, would yield predictable results.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIN JIA whose telephone number is (571)270-5536. The examiner can normally be reached 9:00 am - 7:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571)272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XIN JIA/Primary Examiner, Art Unit 2663