DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is in response to Applicant’s remarks filed on 12/2/2025. The amendments to claims 1, 6, 9, 12 and 14 have been entered. No claims have been cancelled or added by Applicant. Accordingly, claims 1-14 remain pending for examination on the merits.
Response to Arguments
Applicant’s arguments, see pp. 5-7, with respect to the rejection of claims 1-14 have been fully considered.
Regarding the rejections under 35 U.S.C. § 112, Examiner respectfully agrees with Applicant’s remarks, and the rejections of claims 6 and 12 have been withdrawn. However, a new rejection under 35 U.S.C. § 112(b) is set forth below.
Applicant’s arguments with respect to claim(s) 1-14 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. New grounds of rejection are made in view of the following: new amendments provided by Applicant and attached remarks; updated search and review of pertinent, eligible prior art; and/or different interpretation of the previously applied references.
Examiner respectfully notes that Applicant’s arguments address only independent claims 1, 9 and 14; no remarks regarding the subject matter of the dependent claims have been presented. Accordingly, the rejections of dependent claims 2-8 and 10-13 are modified to address Applicant’s amendments and the new rejections of independent claims 1, 9 and 14, and are sustained. The rejections of claims 1-14 under 35 U.S.C. § 102 and 35 U.S.C. § 103 are maintained.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 8 recites the limitation “providing, via a user interface, the modified localization report to the user”. There is insufficient antecedent basis for this limitation in the claim. It is not clear what the “user interface” in claim 8 refers to: under one interpretation, the “user interface” refers to the “user interface” recited in claim 1; under another, distinct interpretation, it refers to a new and distinct user interface. For the purposes of examination, the broadest reasonable interpretation of the claim language, including the interpretations discussed above, is applied to the limitation.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 6-10 and 12-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by de Jonge et al. (US20170360402A1, 2017-12-21; hereinafter “de Jonge”).
Regarding claim 1, de Jonge teaches a method for providing localization information using an ultrasound system (“A method, comprising: […] at least one instruction indicating how the operator is to reposition the ultrasound device;” [clm 11]; “the method may further include identifying an anatomical feature of the subject in the ultrasound image using an automated image processing technique.” [0173]; [0171-0180, 0225-0256], [fig. 9-12]), comprising:
receiving a plurality of ultrasound images for a first imaged region (“obtaining an ultrasound image captured by the ultrasound device” [clm 17]; “In some embodiments, obtaining the ultrasound image of the subject comprises obtaining a plurality of ultrasound images of the subject, and wherein identifying the at least one anatomical feature of the subject comprises identifying a ventricle in each of at least some of the plurality of ultrasound images using a multi-layer neural network.” [0076]; The method enables an operator using the ultrasound device to capture medically relevant ultrasound images and may assist in interpreting the obtained images [0171-0180, 0225-0256], [fig. 9-12]);
receiving an input comprising location information about the first imaged region, wherein the input is derived from a user's interaction with the ultrasound system while the user obtains the plurality of ultrasound images for the first imaged region, and wherein the input corresponds to or is derived from: (a) an ultrasound acquisition setting or (b) the user's interaction with an ultrasound probe of the ultrasound system (“obtaining an image of an ultrasound device being used by an operator, […] generating a composite image at least in part by overlaying, onto the image of the ultrasound device, at least one instruction indicating how the operator is to reposition the ultrasound device; and presenting the composite image to the operator.” [clm 11]; “the ultrasound image may be analyzed using the automated image processing technique to identify the anatomical view contained in the ultrasound image.” [0148]; “The act 1003 of generating the composite image may comprise an act 1004 of identifying a pose of an ultrasound device in the image and an act 1006 of overlaying the instruction onto the image using the identified pose.” [0233]; An image of the operator using the ultrasound device during imaging (i.e., user interaction) and the acquired ultrasound image are used to determine whether the target anatomical view is captured [0171-0180, 0225-0256], [fig. 9-12; see fig. 5B reproduced below]);
processing, using a trained localization algorithm, both: (i) one or more of the received plurality of ultrasound images and (ii) the received input to generate a localization report for the first imaged region, the localization report comprising one or more of a coordinate, a contour, and a structure identification (“any of a variety of automated image processing techniques may be employed to determine whether an ultrasound image contains the target anatomical view. Example automated image processing techniques include machine learning techniques such as deep learning techniques.” [0149]; “a pose (e.g., position and/or orientation) of the ultrasound device in the captured image may be identified using an automated image processing technique (e.g., a deep learning technique) and the information regarding the pose of the ultrasound device may be used to overlay an instruction onto at least part of the ultrasound device in the captured image.” [0164]; “deep learning techniques may be employed to identify the presence of particular organs (such as a heart or a lung) in the ultrasound image. Once the organs in ultrasound image have been identified, the characteristics of the organs (e.g., shape and/or size) may be analyzed to determine a medical parameter of the subject” [0171]; Deep learning techniques (i.e., trained localization algorithm) may be applied to the image of the ultrasound device and the acquired ultrasound image to determine the presence of a target anatomical view and to provide instructions to the user [0171-0180, 0225-0256], [fig. 9-12; see fig. 5B reproduced below]); and
providing, via a user interface, the localization report to the user (“presenting the composite image to the operator” [clm 11]; “The computing device 704 may comprise an integrated display 706 that is configured to display one or more user interface screens of the diagnostic application.” [0210]; “The image acquisition assistance screen may display an ultrasound image 726 captured using the ultrasound device. In some embodiments, the image acquisition assistance screen may display one or more instructions […] Once the ultrasound device has been properly positioned, the image acquisition assistance screen may display an indication that the ultrasound device is properly positioned.” [0215]; “The diagnostic results screen may display diagnostic results 728, 732 determined from analyzing the captured ultrasound image 730.” [0216]; The composite image presented to the operator via computing device screen (i.e., user interface) provides examination instructions and confirms the correct position of the ultrasound device, and may present analysis of the anatomical features in the ultrasound image [0171-0180, 0225-0256], [fig. 9-12; see fig. 5B, 7E-7F reproduced below]).
[de Jonge fig. 5B and figs. 7E-7F, reproduced in greyscale]
The ultrasound device position and the acquired ultrasound images are analyzed using deep learning techniques to determine instructions and to capture the target anatomical view for subsequent diagnosis (de Jonge [fig. 5B, 7E-7F])
Regarding claim 2, de Jonge teaches the method of claim 1,
de Jonge further teaching the method further comprising the steps of:
receiving ultrasound scan information for a present or future scan (“obtaining an image of an ultrasound device being used by an operator, the image being captured by an imaging device different from the ultrasound device;” [clm 11]; “The method comprises (a) receiving an acquisition intent instruction for a final ultrasound imagery;” [0103]; Acquisition intent information (i.e., scan information) related to the final ultrasound image may be received for a real-time ultrasound imaging procedure [0171-0180, 0225-0256], [fig. 7A-7F, 9-12], [see claim 1 rejection]); and
directing, based on the received ultrasound scan information, the user to obtain the plurality of ultrasound images for the first imaged region (“generating a composite image at least in part by overlaying, onto the image of the ultrasound device, at least one instruction indicating how the operator is to reposition the ultrasound device; and” [clm 11]; “(b) receiving a first ultrasound image from an ultrasound probe, the first ultrasound image comprising a perspective of a subject; […] and (e) displaying the identified remedial action to assist in acquisition of the final ultrasound image.” [0103]; The operator receives instructions to obtain the target anatomical view based on the intent information and analysis of the initial ultrasound image [0171-0180, 0225-0256], [fig. 9-12], [see claim 1 rejection]).
Regarding claim 3, de Jonge teaches the method of claim 1,
de Jonge further teaching wherein the plurality of ultrasound images are received and processed in real time (“The App may provide real-time guidance to the operator regarding how to properly position the ultrasound device on the subject to capture a medically relevant ultrasound image.” [0005]; “a method for real-time measurement prediction of ultrasound imaging is provided” [0101]; “The ultrasound data may be processed in real-time during a scanning session as the echo signals are received” [0308]; [0171-0180, 0225-0256], [fig. 9-12]).
Regarding claim 4, de Jonge teaches the method of claim 1,
de Jonge further teaching wherein the trained localization algorithm is a segmentation algorithm or an object detection algorithm (“Example automated image processing techniques include machine learning techniques such as deep learning techniques. In some embodiments, a convolutional neural network may be employed to determine whether an ultrasound image contains the target anatomical view.” [0149]; “For example, deep learning techniques may be employed to identify the presence of particular organs (such as a heart or a lung) in the ultrasound image.” [0171]; The deep learning technique may be a convolutional neural network which identifies organs (i.e., object detection) within the ultrasound image [0171-0180, 0225-0256], [fig. 9-12], [see claim 1 rejection]).
Regarding claim 6, de Jonge teaches the method of claim 1,
de Jonge further teaching wherein the input is further derived from one or more previous ultrasound-scans or ultrasound images for the first imaged region obtained or received before the received plurality of ultrasound images for the first imaged region (“The method comprises (a) receiving a training set comprising a plurality of medical images of a plurality of subjects and a training annotation associated with each of the plurality of medical images; and (b) training the convolutional neural network to regress one or more landmark locations based at least on the training set.” [0114]; “a training set containing N images and the associated ground truth annotations consisting of coordinates referring to P key-points which describe the position of landmarks may be employed. The training set may be used to first obtain the principal modes of variation of the coordinates in Y and then train a convolutional neural network that leverages it.” [0281]; The convolutional neural network may be trained using a plurality of medical images (i.e., previous ultrasound scans) to identify the target anatomical view and medical parameters before being employed by the operator of the ultrasound device [0171-0180, 0225-0256], [fig. 9-12], [see claim 1 rejection]).
Regarding claim 7, de Jonge teaches the method of claim 1,
de Jonge further teaching wherein the input is an identification of a structure or feature within the first imaged region (“The method comprises (a) receiving an acquisition intent instruction for a final ultrasound imagery;” [0103]; “(1) acquire medical information regarding the subject, (2) identify an anatomical view of the subject to image with the ultrasound device based on the acquired medical information regarding the subject,” [0209]; [0171-0180, 0225-0256], [fig. 9-12], [see claim 1 rejection]).
Regarding claim 8, de Jonge teaches the method of claim 1,
de Jonge further teaching the method further comprising:
receiving, from the user via the user interface in response to the provided localization report, modifying input about the first imaged region (“the image acquisition assistance screen may display one or more instructions regarding how to reposition the ultrasound device to obtain an ultrasound image that contains the target anatomical view (e.g., a PLAX view). Once the ultrasound device has been properly positioned, the image acquisition assistance screen may display an indication that the ultrasound device is properly positioned. When a suitable (clinically relevant) image(s) is obtained, the operator may confirm the acquisition via the “Confirm” button.” [0215]; The operator may select the “Confirm” button (i.e., modifying input) when the ultrasound device is properly positioned for acquisition of the target anatomical view [0171-0180, 0225-0256], [fig. 9-12]);
processing, using the trained localization algorithm, both: (i) one or more of the received plurality of ultrasound images and (ii) the received modifying input to generate a modified localization report for the first imaged region (“For a heart failure diagnostic application, the imaging instruction 722 may instruct the operator to begin an assisted ejection fraction (EF) measurement of the subject. […] The EF may be identified be computed by, for example, analyzing one or more ultrasound images of a heart of the subject.” [0214]; “The computing device 704 may transition from the image acquisition assistance screen […] once the ultrasound images have been confirmed by the operator.” [0216]; The computing device may receive ultrasound images and the operator confirmation to begin analysis of the ultrasound images [0171-0180, 0225-0256], [fig. 9-12], [see claim 1 rejection]); and
providing, via a user interface, the modified localization report to the user (“The diagnostic results screen may display diagnostic results 728, 732 determined from analyzing the captured ultrasound image 730.” [0216]; The diagnostic results are presented to the operator after the user confirmation is received and the ultrasound images are analyzed by the deep learning technique [0171-0180, 0225-0256], [fig. 9-12], [see claim 1 rejection]).
Regarding claim 9, de Jonge teaches an ultrasound system configured to provide localization information (“A system, comprising: […] at least one instruction indicating how the operator is to reposition the ultrasound device; and” [clm 20]; “techniques for identifying a medical parameter of a subject using a captured ultrasound image may be embodied as a method that is performed by, for example, a computing device that is communicatively coupled to an ultrasound device.” [0172]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17]), comprising:
a plurality of ultrasound images for a first imaged region (“the at least one processor is configured to generate the composite image at least in part by overlaying the ultrasound image captured by the ultrasound device onto the image of the ultrasound device.” [clm 28]; “In some embodiments, obtaining the ultrasound image of the subject comprises obtaining a plurality of ultrasound images of the subject, and wherein identifying the at least one anatomical feature of the subject comprises identifying a ventricle in each of at least some of the plurality of ultrasound images using a multi-layer neural network.” [0076]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]);
a user interface configured to receive a user input comprising location information about the first imaged region (“The computing device 704 may comprise an integrated display 706 that is configured to display one or more user interface screens of the diagnostic application.” [0210]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]),
wherein the user input is derived from a user's interaction with the ultrasound system while the user obtains the plurality of ultrasound images for the first imaged region, and wherein the input corresponds to or is derived from: (a) an ultrasound acquisition setting or (b) the user's interaction with an ultrasound probe of the ultrasound system (“obtain an image of the ultrasound device being used by the operator captured by the imaging device; generate a composite image at least in part by overlaying, onto the image of the ultrasound device, at least one instruction indicating how the operator is to reposition the ultrasound device;” [clm 20]; “the ultrasound image may be analyzed using the automated image processing technique to identify the anatomical view contained in the ultrasound image.” [0148]; “The act 1003 of generating the composite image may comprise an act 1004 of identifying a pose of an ultrasound device in the image and an act 1006 of overlaying the instruction onto the image using the identified pose.” [0233]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]);
a trained localization algorithm (“the computing device is configured to determine whether the ultrasound image contains the target anatomical view at least in part by analyzing the ultrasound image using a deep learning technique.” [0009]; “Example automated image processing techniques include machine learning techniques such as deep learning techniques. In some embodiments, a convolutional neural network may be employed” [0149]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]);
a processor (“at least one processor” [clm 20]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]) configured to
process, using the trained localization algorithm both: (i) one or more of the received plurality of ultrasound images and (ii) the received user input to generate a localization report for the first imaged region (“any of a variety of automated image processing techniques may be employed to determine whether an ultrasound image contains the target anatomical view. Example automated image processing techniques include machine learning techniques such as deep learning techniques.” [0149]; “a pose (e.g., position and/or orientation) of the ultrasound device in the captured image may be identified using an automated image processing technique (e.g., a deep learning technique) and the information regarding the pose of the ultrasound device may be used to overlay an instruction onto at least part of the ultrasound device in the captured image.” [0164]; “deep learning techniques may be employed to identify the presence of particular organs (such as a heart or a lung) in the ultrasound image. Once the organs in ultrasound image have been identified, the characteristics of the organs (e.g., shape and/or size) may be analyzed to determine a medical parameter of the subject” [0171]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]); and
direct the user interface to provide the localization report to the user (“cause the display to present the composite image to the operator.” [clm 20]; “The computing device 704 may comprise an integrated display 706 that is configured to display one or more user interface screens of the diagnostic application.” [0210]; “The image acquisition assistance screen may display an ultrasound image 726 captured using the ultrasound device. In some embodiments, the image acquisition assistance screen may display one or more instructions […] Once the ultrasound device has been properly positioned, the image acquisition assistance screen may display an indication that the ultrasound device is properly positioned.” [0215]; “The diagnostic results screen may display diagnostic results 728, 732 determined from analyzing the captured ultrasound image 730.” [0216]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]).
Regarding claim 10, de Jonge teaches the ultrasound system of claim 9,
de Jonge further teaching wherein the localization report comprises one or more of a coordinate, a contour, and a structure identification (“any of a variety of automated image processing techniques may be employed to determine whether an ultrasound image contains the target anatomical view. Example automated image processing techniques include machine learning techniques such as deep learning techniques.” [0149]; “a pose (e.g., position and/or orientation) of the ultrasound device in the captured image may be identified using an automated image processing technique (e.g., a deep learning technique) and the information regarding the pose of the ultrasound device may be used to overlay an instruction onto at least part of the ultrasound device in the captured image.” [0164]; “deep learning techniques may be employed to identify the presence of particular organs (such as a heart or a lung) in the ultrasound image. Once the organs in ultrasound image have been identified, the characteristics of the organs (e.g., shape and/or size) may be analyzed to determine a medical parameter of the subject” [0171]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]).
Regarding claim 12, de Jonge teaches the ultrasound system of claim 9, de Jonge further teaching wherein the user-input is derived from one or more previous ultrasound images for the first imaged region obtained or received before the received plurality of ultrasound images for the first imaged region (“The method comprises (a) receiving a training set comprising a plurality of medical images of a plurality of subjects and a training annotation associated with each of the plurality of medical images; and (b) training the convolutional neural network to regress one or more landmark locations based at least on the training set.” [0114]; “a training set containing N images and the associated ground truth annotations consisting of coordinates referring to P key-points which describe the position of landmarks may be employed. The training set may be used to first obtain the principal modes of variation of the coordinates in Y and then train a convolutional neural network that leverages it.” [0281]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 6 rejection]).
Regarding claim 13, de Jonge teaches the ultrasound system of claim 9, de Jonge further teaching wherein the user input is an identification of a structure or feature within the first imaged region (“The method comprises (a) receiving an acquisition intent instruction for a final ultrasound imagery;” [0103]; “(1) acquire medical information regarding the subject, (2) identify an anatomical view of the subject to image with the ultrasound device based on the acquired medical information regarding the subject,” [0209]; [0171-0193, 0225-0256], [fig. 9-12], [see claim 7 rejection]).
Regarding claim 14, de Jonge teaches a non-transitory computer readable storage medium having computer readable program code embodied therein for causing an ultrasound system to provide localization information (“At least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by at least one processor, cause the at least one processor to: obtain an image of an ultrasound device being used by an operator,” [clm 30]; “techniques for identifying a medical parameter of a subject using a captured ultrasound image may be embodied as a method that is performed by, for example, a computing device that is communicatively coupled to an ultrasound device.” [0172]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17]) by:
receiving a plurality of ultrasound images for a first imaged region (“In some embodiments, obtaining the ultrasound image of the subject comprises obtaining a plurality of ultrasound images of the subject, and wherein identifying the at least one anatomical feature of the subject comprises identifying a ventricle in each of at least some of the plurality of ultrasound images using a multi-layer neural network.” [0076]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]);
receiving an input comprising location information about the first imaged region wherein the input is derived from a user's interaction with the ultrasound system while the user obtains the plurality of ultrasound images for the first imaged region, and wherein the input corresponds to or is derived from: (a) an ultrasound acquisition setting or (b) the user's interaction with an ultrasound probe of the ultrasound system (“obtain an image of an ultrasound device being used by an operator, the image being captured by an imaging device different from the ultrasound device; generate a composite image at least in part by overlaying, onto the image of the ultrasound device, at least one instruction indicating how the operator is to reposition the ultrasound device; and” [clm 30]; “the ultrasound image may be analyzed using the automated image processing technique to identify the anatomical view contained in the ultrasound image.” [0148]; “The act 1003 of generating the composite image may comprise an act 1004 of identifying a pose of an ultrasound device in the image and an act 1006 of overlaying the instruction onto the image using the identified pose.” [0233]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]);
processing, using a trained localization algorithm, both: (i) one or more of the received plurality of ultrasound images and (ii) the received input to generate a localization report for the first imaged region, the localization report comprising one or more of a coordinate, a contour, and a structure identification (“any of a variety of automated image processing techniques may be employed to determine whether an ultrasound image contains the target anatomical view. Example automated image processing techniques include machine learning techniques such as deep learning techniques.” [0149]; “a pose (e.g., position and/or orientation) of the ultrasound device in the captured image may be identified using an automated image processing technique (e.g., a deep learning technique) and the information regarding the pose of the ultrasound device may be used to overlay an instruction onto at least part of the ultrasound device in the captured image.” [0164]; “deep learning techniques may be employed to identify the presence of particular organs (such as a heart or a lung) in the ultrasound image. Once the organs in ultrasound image have been identified, the characteristics of the organs (e.g., shape and/or size) may be analyzed to determine a medical parameter of the subject” [0171]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]); and
providing, via a user interface, the localization report to the user (“cause the display to present the composite image to the operator.” [clm 30]; “The computing device 704 may comprise an integrated display 706 that is configured to display one or more user interface screens of the diagnostic application.” [0210]; “The image acquisition assistance screen may display an ultrasound image 726 captured using the ultrasound device. In some embodiments, the image acquisition assistance screen may display one or more instructions […] Once the ultrasound device has been properly positioned, the image acquisition assistance screen may display an indication that the ultrasound device is properly positioned.” [0215]; “The diagnostic results screen may display diagnostic results 728, 732 determined from analyzing the captured ultrasound image 730.” [0216]; [0171-0193, 0225-0256], [fig. 1-2, 9-12, 15A-17], [see claim 1 rejection]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over de Jonge as applied to claims 1 and 9 above, in view of Plakas et al. (US20160292848A1, 2016-10-06; hereinafter “Plakas”).
Regarding claim 5, de Jonge teaches the method of claim 1,
de Jonge further teaching the localization report generated by the trained localization algorithm [see claim 1 rejection];
but de Jonge may fail to expressly teach wherein the localization report further comprises a confidence score generated by the trained localization algorithm.
However, in the same field of endeavor, Plakas teaches a method for providing localization information using an ultrasound system (“A medical imaging data processing method, comprising: setting a plurality of seeds at different locations in medical image data; […] identifying at least one target region” [clm 20]; “apparatus 20 comprises an ultrasound machine 22 and associated probe 24. […] that are configured to obtain ultrasound image data that is suitable for 2D, 3D or 4D imaging.” [0018]; [0015-0051], [fig. 1-2, 4]);
Plakas further teaching wherein the localization report further comprises a confidence score generated by the trained localization algorithm (“The processing circuitry 36 discards the seeds 90 that are not determined to belong to follicle tissue. The processing circuitry 36 may keep only seeds 90 for which there is a high level of confidence that the seeds are inside follicles (based on the statistics calculated at stage 106).” [0049]; The confidence that seeds are correctly located within follicles depicted in the ultrasound image data is calculated during processing [0029-0077], [fig. 1-2, 4-8]).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the method for providing localization information using an ultrasound system taught by de Jonge with the confidence score as taught by Plakas. Holding the ultrasound device a few inches too high or too low on the subject may make the difference between capturing a medically relevant ultrasound image and capturing a medically irrelevant ultrasound image. As a result, non-expert operators of an ultrasound device may have considerable trouble capturing medically relevant ultrasound images of a subject (de Jonge [0004]). Automatic three-dimensional segmentation and measurement of follicles may save time and reduce measurement error, and in some circumstances be more accurate and/or repeatable than manual measurement. The user may be able to concentrate on achieving a good ultrasound scan image in real time without having to also perform manual follicle measurements (Plakas [0078]). The technology improvements described may enable, among other capabilities, focused diagnosis, early detection and treatment of conditions by an ultrasound system (de Jonge [0209]).
Regarding claim 11, de Jonge teaches the ultrasound system of claim 9, de Jonge further teaching the localization report generated by the trained localization algorithm [see claim 1 rejection];
but de Jonge may fail to expressly teach wherein the localization report further comprises a confidence score generated by the trained localization algorithm.
However, in the same field of endeavor, Plakas teaches an ultrasound system configured to provide localization information (“A medical imaging data processing apparatus, comprising: setting circuitry configured to set a plurality of seeds at different locations in medical image data; […] region identifying circuitry configured to identify at least one target region” [clm 1]; “apparatus 20 comprises an ultrasound machine 22 and associated probe 24. […] that are configured to obtain ultrasound image data that is suitable for 2D, 3D or 4D imaging.” [0018]; [0015-0051], [fig. 1-2, 4]);
Plakas further teaching wherein the localization report further comprises a confidence score generated by the trained localization algorithm (“The processing circuitry 36 discards the seeds 90 that are not determined to belong to follicle tissue. The processing circuitry 36 may keep only seeds 90 for which there is a high level of confidence that the seeds are inside follicles (based on the statistics calculated at stage 106).” [0049]; [0029-0077], [fig. 1-2, 4-8], [see claim 5 rejection]).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the invention to modify the system for providing localization information using an ultrasound system taught by de Jonge with the confidence score as taught by Plakas. Holding the ultrasound device a few inches too high or too low on the subject may make the difference between capturing a medically relevant ultrasound image and capturing a medically irrelevant ultrasound image. As a result, non-expert operators of an ultrasound device may have considerable trouble capturing medically relevant ultrasound images of a subject (de Jonge [0004]). Automatic three-dimensional segmentation and measurement of follicles may save time and reduce measurement error, and in some circumstances be more accurate and/or repeatable than manual measurement. The user may be able to concentrate on achieving a good ultrasound scan image in real time without having to also perform manual follicle measurements (Plakas [0078]). The technology improvements described may enable, among other capabilities, focused diagnosis, early detection and treatment of conditions by an ultrasound system (de Jonge [0209]).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to James F. McDonald III, whose telephone number is (571) 272-7296. The examiner can normally be reached M-F, 8 AM-6 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chris Koharski, can be reached at (571) 272-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JAMES FRANKLIN MCDONALD III
Examiner
Art Unit 3797
/CHRISTOPHER KOHARSKI/Supervisory Patent Examiner, Art Unit 3797