DETAILED ACTION
This Office action is in response to the communication received on September 26, 2025, concerning application No. 18/594,724, filed on March 4, 2024.
Claims 1-6 and 8-20 are currently pending.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 09/26/2025 regarding the claim objections have been fully considered. The amendments to the claims have been entered and overcome the objections to claims 1 and 2 previously set forth. The examiner notes, however, that further claim objections have been identified.
Applicant's arguments filed 09/26/2025 regarding the 35 U.S.C. 101 rejection have been fully considered. The amendments to the claims have been entered and overcome the 35 U.S.C. 101 rejection of the claims previously set forth. Specifically, the claim amendments integrate the judicial exception into a practical application.
Applicant's arguments filed 09/26/2025 regarding the prior art rejection have been fully considered, but they are not persuasive. In response to the applicant's arguments that the prior art fails to teach "detecting a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images" and "automatically capturing a canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow", the examiner respectfully disagrees. Hao (CN111528907A), which was previously relied upon for the rejection of claim 13, is now being relied upon for teaching the newly filed claim amendments recited above. Please see the rejection of the claims below for how the Hao reference is applied to teach the deficiencies of Halmann and why it would have been obvious to combine the references.
Claim Objections
Claims 13 and 19 are objected to because of the following informalities:
Claim 13, line 3, “a user” should read “the user”; and
Claim 19, line 3, “a user” should read “the user”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Halmann et al. (US 20220061813, hereinafter Halmann) in view of Hao et al. (CN111528907A, hereinafter Hao).
Regarding claim 1, Halmann teaches an ultrasound imaging system (medical imaging system 100 in fig. 1 and image processing system 200 shown in fig. 2) configured for conducting a diagnostic procedure on a subject ([0016] and [0042] disclose using the system to diagnose a patient), the system comprising:
an ultrasound imaging probe (probe 106 in fig. 1);
a computing system (the electronic circuitry of system 100 in fig. 1); and
a computer-readable storage medium, storing instructions that ([0040] discloses the computer processor performs operations based on instructions stored on a tangible and non-transitory computer readable storage medium), when executed by a processor of the computing system cause the ultrasound imaging system to:
obtain a plurality of ultrasound images of at least a portion of a lung of a subject ([0043]-[0046] disclose generating ultrasound images from the ultrasound data acquired from an imaged lung);
process the plurality of ultrasound images to automatically classify B lines in the acquired plurality of ultrasound images ([0049]-[0050] discloses detecting irregularities in the ultrasound image including B-lines) comprising:
distinguish B-lines comprised within the plurality of ultrasound images from one or more alternate features comprised within the plurality of ultrasound images to obtain featurized B-lines associated with the acquired plurality of ultrasound images ([0049]-[0050] discloses identifying B-lines in each ultrasound image which involves distinguishing irregularities (B-lines) from non-irregularities (alternate features));
automatically capture a canonical image of the lung of the subject ([0045] discloses automatically acquiring the data including initiating the scanning process which includes capturing images of the lung as shown in fig. 4 and 6A), wherein the canonical image includes a center of a field of view of the ultrasound image, a first rib shadow on a first side of the center of the field of view, a second rib shadow on a second side of the center of the field of view, and a pleural line in the center of the field of view ([0054], [0067], figs. 4 and 6A disclose the captured image includes the pleural line in the center of the image represented by the markers and the dark portion (shadow) of the image which represents a first rib shadow and a second rib shadow each on a side of the pleural line in the center of the field of view);
display a user interface including the canonical image with a first rib shadow, a second rib shadow, and a pleural line annotation for the pleural line (fig. 6A discloses displaying the image on a display 601 which includes an annotation of the pleural line (markers). The image also includes the first rib shadow and the second rib shadow);
automatically determine, based at least in part on the featurized B-lines and the canonical image, one or more B-line classifiers ([0060] discloses scoring the ultrasound image based on the number of identified B-lines; the score corresponds to the B-line classifier. [0054]-[0056] disclose the B-lines are determined based on the location of the pleural line within the obtained image; therefore, the B-line classifier is determined based on the featurized B-lines and the canonical image containing the pleural line); and
output the one or more B-line classifiers to a user of the ultrasound imaging system ([0061] discloses outputting the annotated ultrasound image with the highest score for display; by displaying the image, the B-line classifier is outputted to a user).
Halmann does not specifically teach detect a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images; automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow; and the display of the canonical image includes a first rib shadow annotation and a second rib shadow annotation.
However,
Hao, in a similar field of endeavor, teaches detecting a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images (pg. 3, “the rib and rib shadow…of the healthy sample lung ultrasonic image can be automatically and correctly labeled through the trained supervised convolutional neural network”. The abstract further discloses the labeling includes labeling rib shadows. Pg. 4 further discloses multiple images are marked, meaning a plurality of ultrasound images are acquired); automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow (pg. 3 discloses that after all of the areas of the image are outlined, the outlined images are selected to be imported for characteristic extraction. By selecting the image for further analysis, the image is being captured based on the rib area being outlined (detected) within the image. This corresponds to the capture procedure outlined in [0160] of the present application's specification); and displaying the canonical image includes a first rib shadow annotation and a second rib shadow annotation (the abstract and pg. 3 disclose the labeled image is displayed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the processor of Halmann to include detecting a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images; automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow; and displaying the canonical image includes a first rib shadow annotation and a second rib shadow annotation in order to increase the accuracy of the diagnosis, as recognized by Hao (Abstract).
Regarding claim 2, Halmann teaches a method for ultrasound imaging, the method comprising:
obtaining a plurality of ultrasound images of at least a portion of a lung of a subject ([0043]-[0046] disclose generating ultrasound images from the ultrasound data acquired from an imaged lung);
processing the plurality of ultrasound images to automatically classify B lines in the acquired plurality of ultrasound images ([0049]-[0050] discloses detecting irregularities in the ultrasound image including B-lines) comprising:
distinguishing B-lines comprised within the plurality of ultrasound images based at least in part on one or more alternate features comprised within the plurality of ultrasound images to obtain featurized B-lines associated with the acquired plurality of ultrasound images ([0049]-[0050] discloses identifying B-lines in each ultrasound image which involves distinguishing irregularities (B-lines) from non-irregularities (alternate features). [0054]-[0055] and fig. 4 additionally show the alternate features include pleural lines);
automatically capturing a canonical image of the lung of the subject ([0045] discloses automatically acquiring the data including initiating the scanning process which includes capturing images of the lung as shown in fig. 4 and 6A), wherein the canonical image includes a center of a field of view of the ultrasound image, a first rib shadow on a first side of the center of the field of view, a second rib shadow on a second side of the center of the field of view, and a pleural line in the center of the field of view ([0054], [0067], figs. 4 and 6A disclose the captured image includes the pleural line in the center of the image represented by the markers and the dark portion (shadow) of the image which represents a first rib shadow and a second rib shadow each on a side of the pleural line in the center of the field of view);
displaying a user interface including the canonical image with a first rib shadow, a second rib shadow, and a pleural line annotation for the pleural line (fig. 6A discloses displaying the image on a display 601 which includes an annotation of the pleural line (markers). The image also includes the first rib shadow and the second rib shadow);
automatically determining, based at least in part on the featurized B-lines and the canonical image, one or more B-line classifiers ([0060] discloses scoring the ultrasound image based on the number of identified B-lines; the score corresponds to the B-line classifier. [0054]-[0056] disclose the B-lines are determined based on the location of the pleural line within the obtained image; therefore, the B-line classifier is determined based on the featurized B-lines and the canonical image containing the pleural line); and
outputting the one or more B-line classifiers to a user ([0061] discloses outputting the annotated ultrasound image with the highest score for display; by displaying the image, the B-line classifier is outputted to a user).
Halmann does not specifically teach detecting a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images; automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow; and the display of the canonical image includes a first rib shadow annotation and a second rib shadow annotation.
However,
Hao, in a similar field of endeavor, teaches detecting a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images (pg. 3, “the rib and rib shadow…of the healthy sample lung ultrasonic image can be automatically and correctly labeled through the trained supervised convolutional neural network”. The abstract further discloses the labeling includes labeling rib shadows. Pg. 4 further discloses multiple images are marked, meaning a plurality of ultrasound images are acquired); automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow (pg. 3 discloses that after all of the areas of the image are outlined, the outlined images are selected to be imported for characteristic extraction. By selecting the image for further analysis, the image is being captured based on the rib area being outlined (detected) within the image. This corresponds to the capture procedure outlined in [0160] of the present application's specification); and displaying the canonical image includes a first rib shadow annotation and a second rib shadow annotation (the abstract and pg. 3 disclose the labeled image is displayed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the processor of Halmann to include detecting a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images; automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow; and displaying the canonical image includes a first rib shadow annotation and a second rib shadow annotation in order to increase the accuracy of the diagnosis, as recognized by Hao (Abstract).
Regarding claim 3, Halmann in view of Hao teaches the system of claim 1, as set forth above. Halmann further teaches the one or more B-line classifiers are provided based at least in part on a detection of one or more pleural lines, and/or based at least in part on a detection of one or more normal A-lines ([0056] discloses the number of identified B-lines is based on the detection of the multiple pleural lines within the image; therefore, the B-line classifier is provided based at least in part on a detection of one or more pleural lines).
Regarding claim 4, Halmann in view of Hao teaches the system of claim 1, as set forth above. Halmann further teaches the plurality of ultrasound images are comprised in an image clip ([0046] discloses ultrasound images (plurality) are generated from the ultrasound data; the plurality of ultrasound images makes up the image clip), and the one or more B-line classifiers are determined for each image of the image clip ([0060] discloses each ultrasound image is scored, meaning each image has its B-line classifier determined).
Regarding claim 10, Halmann in view of Hao teaches the system of claim 1, as set forth above. Halmann further teaches the alternate features comprise: A-lines, pleural lines, or rib shadows ([0054]-[0055] and fig. 4 disclose identifying the B-lines based on the location of the pleural lines).
Regarding claim 11, Halmann in view of Hao teaches the system of claim 10, as set forth above. Halmann further teaches B-lines are distinguished from A-lines, pleural lines, and rib shadows ([0054]-[0055] and fig. 4 disclose the B-lines are distinguished from the pleural lines and rib shadows. [0003] since A-lines are horizontal lines and B-lines are vertical artifacts, the B-lines are also being distinguished from A-lines).
Regarding claim 12, Halmann in view of Hao teaches the system of claim 10, as set forth above. Halmann further teaches the processor is further configured to annotate the alternate features in one or more of the plurality of ultrasound images ([0054]-[0055] and fig. 4 disclose the pleural lines are annotated in the images as shown in fig. 4).
Regarding claim 13, Halmann in view of Hao teaches the system of claim 12, as set forth above. Hao further teaches each B-line, A-line, pleural line, and rib shadow present in the plurality of ultrasound images is annotated and displayed to a user (pg. 4 discloses, “after all areas (ribs, pleural lines, lung sliding outlines, A/B lines) are outlined, the system imports the outlined images into a ‘marked patient image’ sub-library”. The abstract further teaches the marking of the ribs includes rib shadows. The abstract and pg. 3 disclose the labeled image is displayed. Therefore, each B-line, A-line, pleural line, and rib shadow present in the plurality of ultrasound images is annotated).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Halmann in view of Hao to have each B-line, A-line, pleural line, and rib shadow present in the plurality of ultrasound images be annotated and displayed to a user in order to increase the accuracy of the diagnosis, as recognized by Hao (Abstract).
Regarding claim 14, Halmann in view of Hao teaches the system of claim 13, as set forth above. Halmann further teaches the annotation and display is performed in real time during acquisition of the ultrasound images ([0042] and [0053] disclose the annotation and display is performed in real time).
Regarding claim 15, Halmann in view of Hao teaches the system of claim 13, as set forth above. Halmann further teaches the annotation and display is performed offline using a previously acquired ultrasound image clip ([0042] discloses performing the method 300 which includes the annotation and displaying offline after the ultrasound images are acquired).
Regarding claim 16, Halmann in view of Hao teaches the system of claim 1, as set forth above. Halmann further teaches the distinguishing is performed by submitting the plurality of ultrasound images to a trained machine learning model ([0032]-[0034] disclose identifying the anatomical irregularities (distinguishing) is performed using a neural network such as a trained machine learning model).
Regarding claim 17, Halmann in view of Hao teaches the system of claim 16, as set forth above. Halmann further teaches the trained machine learning model comprises one or more neural networks ([0032] discloses the model for identifying the anatomical irregularities is a neural network).
Regarding claim 18, Halmann in view of Hao teaches the system of claim 1, as set forth above. Halmann further teaches classifying a pathology of the subject based on the one or more B-line classifiers ([0062] “outputting a suggested diagnosis to the display…the presence of B-lines and consolidation may indicate an accumulation of fluid, such as due to bacterial or viral pneumonia”. [0049] discloses the number of B-lines is involved in the diagnosis).
Regarding claim 19, Halmann in view of Hao teaches the system of claim 18, as set forth above. Halmann further teaches the pathology is lung deaeration ([0049] discloses the irregularities correspond to the decrease of air content in the lung), and the processor is further configured to alert a user to a severity of the lung deaeration ([0067] and fig. 6B disclose displaying a quantification of a percentage of pleural irregularities which corresponds to a severity of the lung deaeration).
Regarding claim 20, Halmann teaches a non-transitory computer-readable medium, storing instructions that, when executed by a processor of a computer ([0040] discloses the computer processor performs operations based on instructions stored on a tangible and non-transitory computer readable storage medium), cause the computer to:
obtain a plurality of ultrasound images of at least a portion of a lung of a subject ([0043]-[0046] disclose generating ultrasound images from the ultrasound data acquired from an imaged lung);
process the plurality of ultrasound images to automatically classify B lines in the acquired plurality of ultrasound images ([0049]-[0050] discloses detecting irregularities in the ultrasound image including B-lines) comprising:
distinguish B-lines comprised within the plurality of ultrasound images from one or more alternate features comprised within the plurality of ultrasound images to obtain featurized B-lines associated with the acquired plurality of ultrasound images ([0049]-[0050] discloses identifying B-lines in each ultrasound image which involves distinguishing irregularities (B-lines) from non-irregularities (alternate features). [0054]-[0055] and fig. 4 additionally show the alternate features include pleural lines);
automatically capture a canonical image of the lung of the subject ([0045] discloses automatically acquiring the data including initiating the scanning process which includes capturing images of the lung as shown in fig. 4 and 6A), wherein the canonical image includes a center of a field of view of the ultrasound image, a first rib shadow on a first side of the center of the field of view, a second rib shadow on a second side of the center of the field of view, and a pleural line in the center of the field of view ([0054], [0067], figs. 4 and 6A disclose the captured image includes the pleural line in the center of the image represented by the markers and the dark portion (shadow) of the image which represents a first rib shadow and a second rib shadow each on a side of the pleural line in the center of the field of view);
display a user interface including the canonical image with a first rib shadow, a second rib shadow, and a pleural line annotation for the pleural line (fig. 6A discloses displaying the image on a display 601 which includes an annotation of the pleural line (markers). The image also includes the first rib shadow and the second rib shadow);
automatically determine, based at least in part on the featurized B-lines and the canonical image, one or more B-line classifiers ([0060] discloses scoring the ultrasound image based on the number of identified B-lines; the score corresponds to the B-line classifier. [0054]-[0056] disclose the B-lines are determined based on the location of the pleural line within the obtained image; therefore, the B-line classifier is determined based on the featurized B-lines and the canonical image containing the pleural line); and
output the one or more B-line classifiers to a user ([0061] discloses outputting the annotated ultrasound image with the highest score for display; by displaying the image, the B-line classifier is outputted to a user).
Halmann does not specifically teach detect a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images; automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow; and the display of the canonical image includes a first rib shadow annotation and a second rib shadow annotation.
However,
Hao, in a similar field of endeavor, teaches detecting a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images (pg. 3, “the rib and rib shadow…of the healthy sample lung ultrasonic image can be automatically and correctly labeled through the trained supervised convolutional neural network”. The abstract further discloses the labeling includes labeling rib shadows. Pg. 4 further discloses multiple images are marked, meaning a plurality of ultrasound images are acquired); automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow (pg. 3 discloses that after all of the areas of the image are outlined, the outlined images are selected to be imported for characteristic extraction. By selecting the image for further analysis, the image is being captured based on the rib area being outlined (detected) within the image. This corresponds to the capture procedure outlined in [0160] of the present application's specification); and displaying the canonical image includes a first rib shadow annotation and a second rib shadow annotation (the abstract and pg. 3 disclose the labeled image is displayed).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the processor of Halmann to include detecting a first rib shadow and a second rib shadow of the subject based on the acquired plurality of ultrasound images; automatically capturing the canonical image of the lung of the subject based on detecting the first rib shadow and the second rib shadow; and displaying the canonical image includes a first rib shadow annotation and a second rib shadow annotation in order to increase the accuracy of the diagnosis, as recognized by Hao (Abstract).
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Halmann in view of Hao as applied to claim 4 above, and further in view of Fiegoli et al. (US 20230404541, hereinafter Fiegoli).
Regarding claim 5, Halmann in view of Hao teaches the system of claim 4, as set forth above. Halmann in view of Hao does not specifically teach assigning a B-line score to the image clip based on the one or more B-line classifiers.
However,
Fiegoli in a similar field of endeavor teaches assigning a B-line score to the image clip based on the one or more B-line classifiers ([0117] discloses analyzing multiple frames in a cine (clip) and determining a count of B-lines among the analyzed frames in the cine which represents the B-line score of the image clip).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Halmann in view of Hao to assign a B-line score to the image clip based on the one or more B-line classifiers in order to reduce the number of individual data points the user needs to analyze, thereby making the procedure more efficient.
Regarding claim 6, Halmann in view of Hao teaches the system of claim 5, as set forth above. Fiegoli further teaches the one or more B-line classifiers comprise a B-line count, and the B-line score assigned to the image clip comprises a total number of detected B-lines ([0117] discloses the count represents the total number of B-lines in a cine (image clip)).
Claims 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Halmann in view of Hao and Fiegoli as applied to claim 5 above, and further in view of Halmann et al. (US 20170086790, hereinafter Halmann 2).
Regarding claim 8, Halmann in view of Hao and Fiegoli teaches the system of claim 5, as set forth above. Halmann in view of Hao and Fiegoli does not specifically teach determining that the assigned B-line score meets a threshold and automatically saving the image clip in a memory of an ultrasound system.
However,
Halmann 2, in a similar field of endeavor, teaches determining that the assigned B-line score meets a threshold and automatically saving the image clip in a memory of an ultrasound system ([0038] discloses when the ultrasound image has the highest score it is stored; the image having the highest score is considered to meet the threshold. [0065] discloses the storing is performed automatically).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system disclosed by Halmann in view of Hao and Fiegoli to determine that the assigned B-line score meets a threshold and automatically save the image clip in a memory of an ultrasound system in order for the image clip to be used at a later time for training.
Regarding claim 9, Halmann in view of Hao, Fiegoli and Halmann 2 teaches the system of claim 8, as set forth above. Halmann further teaches identifying a subset of the plurality of images comprised in the image clip which are representative of the clip; and displaying one or more images of the representative subset by a display ([0061] discloses outputting the ultrasound image having the highest score to the display; the identification of the image with the highest score is considered identifying a subset of the plurality of images).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW BEGEMAN whose telephone number is (571)272-4744. The examiner can normally be reached Monday-Thursday 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Keith Raymond, can be reached at 571-270-1790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANDREW W BEGEMAN/Examiner, Art Unit 3798