Prosecution Insights
Last updated: April 19, 2026
Application No. 17/778,066

INTELLIGENT MEASUREMENT ASSISTANCE FOR ULTRASOUND IMAGING AND ASSOCIATED DEVICES, SYSTEMS, AND METHODS

Final Rejection: §101, §103, §112
Filed: May 19, 2022
Examiner: VIRK, ADIL PARTAP S
Art Unit: 3798
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Koninklijke Philips N.V.
OA Round: 6 (Final)
Grant Probability: 48% (Moderate)
OA Rounds: 7-8
To Grant: 3y 2m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 48% (grants 48% of resolved cases; 102 granted / 213 resolved; -22.1% vs TC avg)
Interview Lift: +41.3% (strong lift; allowance rate with vs without interview among resolved cases)
Avg Prosecution: 3y 2m typical timeline; 44 applications currently pending
Career History: 257 total applications across all art units

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 38.8% (-1.2% vs TC avg)
§102: 13.6% (-26.4% vs TC avg)
§112: 31.0% (-9.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 213 resolved cases
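The headline figures above can be recomputed from the raw disposition counts. A minimal sketch, using the counts stated on this page; the Tech Center (TC) averages are only implied by the stated deltas, not independently sourced:

```python
# Counts taken from the page: 102 granted / 213 resolved.
granted, resolved = 102, 213
allow_rate = granted / resolved                      # 0.4789... shown as "48%"

# The "-22.1% vs TC avg" delta implies a TC average near 70%.
tc_delta = -0.221
implied_tc_avg = allow_rate - tc_delta               # roughly 0.70

# Statute-specific rows follow the same pattern, e.g. the §101 row
# (13.0% with a -27.0% delta implies a 40.0% TC average).
sec101_rate, sec101_delta = 0.130, -0.270
implied_tc_sec101_avg = sec101_rate - sec101_delta   # 0.40
```

The same subtraction recovers each implied TC average in the statute table above.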

Office Action

§101 §103 §112
DETAILED ACTION

This office action is in response to the communication received on 09/12/2025 concerning application no. 17/778,066 filed on 05/19/2022. Claims 1-2, 4-6, 9-10, 13-17, and 24-26 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 09/12/2025 have been fully considered but they are not persuasive. Regarding the 112(b) rejections of claims 4 and 5, Applicant argues that the claims are not indefinite because the specification discloses adjustment of the copied marker locations to the optimal locations, and because the measure optimizes the cost function and is based on a probability distribution. Examiner disagrees. Applicant is reminded the rejection is under 112(b) and not 112(a). Nowhere in the claims is there an adjustment of the copied marker locations to the optimal locations, nor a measure that optimizes the cost function based on a probability distribution. Applicant's argument that these features would be understood is without support. Assuming, arguendo, the specification could provide the basis for remedying a 112(b) issue, the specification does not establish that the first marker pair is propagated across the images based on the position data of the ultrasound array with respect to the different imaging planes, or that the determination of the 3D spatial data is the basis for the propagation. Examiner maintains the rejection.

Applicant's arguments filed 09/12/2025 have been fully considered but they are not persuasive. Applicant argues that the application addresses a problem in an existing technology by allowing for optimal measurements of the fetal head. Applicant argues that there is a particular machine because the claims establish an ultrasound probe.
Applicant further argues that the determination of the measurement is not routine, well-known, and conventional and passes under Step 2B. Examiner disagrees. MPEP 716.01(c) establishes "Arguments presented by the applicant cannot take the place of evidence in the record. In re Schulze, 346 F.2d 600, 602, 145 USPQ 716, 718 (CCPA 1965) and In re De Blauwe, 736 F.2d 699, 705, 222 USPQ 191, 196 (Fed. Cir. 1984)." The independent claims do not establish the optimal measurements of the fetal head. While claim 15 establishes a measurement associated with a fetal head, such a determination is a mental process. It can be performed by assessing the radius of the head and multiplying it by 2π, thereby providing a circumference. A more accurate measure than a general approximation can also be provided via a mental process in the form of the thread/string method, rolling method, numerical integration, or averaging.

Examiner notes that ultrasound imaging has been a technology present since 1942, when Dr. Karl Theodore Dussik published the first paper on transmission ultrasound investigation of the brain, and commercial availability of ultrasound has been present since the 1960s [1]. The concept of ultrasound use in a medical context is taught as a course in university. For example, the University of Rochester provides BME 451 (Biomedical Ultrasound) and BME 453 (Ultrasound Imaging), which are described as "Presents the physical basis for the use of high-frequency sound in medicine. Topics include acoustic properties of tissue, sound propagation (both linear and nonlinear) in tissues, interaction of ultrasound with gas bodies (acoustic cavitation and contrast agents), thermal and non-thermal biological effects, ultrasonography, dosimetry, hyperthermia, and lithotripsy" and "Investigates the imaging techniques applied in state-of-the-art ultrasound imaging and their theoretical bases.
Topics include linear acoustic systems, spatial impulse responses, the k-space formulation, methods of acoustic field calculation, dynamic focusing and apodization, scattering, the statistics of acoustic speckle, speckle correlation, compounding techniques, phase aberration correction, velocity estimation, and flow imaging", respectively [2]. Furthermore, the history and fundamental principles of ultrasound are widely available in textbooks [3]. Specifically, the use of ultrasound probes for ultrasound imaging is clearly taught in this basics-of-ultrasound textbook, with Figs. 7.8-10 clearly showing three ultrasound probes. Page 119 teaches the "Functions of transducer: ● Used as both transmitter and receiver ● Transmission mode: converts an oscillating voltage into mechanical vibrations, which causes a series of pressure waves into the body ● Receiving mode: converts backscattered pressure waves into electrical signal." Furthermore, the cited medical textbook states that "Ultrasound imaging is based on the 'pulse-echo' principle in which a short burst of ultrasound is emitted from a transducer and directed into tissue. Echoes are produced as a result of the interaction of sound with tissue, and some of these travel back to the transducer. By timing the period elapsed between the emission of the pulse and the reception of the echo, the distance between the transducer and the echo-producing structure can be calculated and an image is formed" (emphasis added) [4]. Since ultrasound probes have been used in medical imaging for over 70 years, have had commercial availability for around 60 years, and the fundamentals of ultrasound imaging are widely available in academia, it can be concluded that the transmission of ultrasound and the generation of ultrasound images via an ultrasound probe is well known, routine, and conventional. Applicant's allegation about the measurement not being routine, well-known, and conventional is unpersuasive.
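The arithmetic the Examiner characterizes as a mental process (a 2πr circumference, refined by numerical integration) can be made concrete. A minimal sketch; all measurement values and the ellipse model are hypothetical, chosen only for illustration:

```python
import math

# Rough circumference from a single radius: C = 2*pi*r
# (hypothetical radius value).
radius_mm = 45.0
rough_circumference = 2 * math.pi * radius_mm  # about 282.7 mm

# The "numerical integration" variant: approximate the perimeter of an
# ellipse (a common head-shape model; hypothetical semi-axes) by summing
# short chord lengths around the boundary.
a_mm, b_mm = 50.0, 40.0
n = 100_000
perimeter = 0.0
prev = (a_mm, 0.0)
for i in range(1, n + 1):
    t = 2.0 * math.pi * i / n
    point = (a_mm * math.cos(t), b_mm * math.sin(t))
    perimeter += math.dist(point, prev)
    prev = point
# As n grows, the chord sum converges to the true ellipse perimeter.
```

The chord-summing loop is the pen-and-paper procedure the Office Action alludes to, carried out mechanically.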
The analysis is performed under Step 2A, Prong 1, and not Step 2B. Applicant misunderstands the 101 analysis. The assessment of the distances and the determination of the measurements is a mental process and is not assessed as an additional element. Assuming, arguendo, the assessment were to be evaluated to determine if it was routine, well-known, and conventional, Applicant's remarks are conclusory and without support. Again, MPEP 716.01(c) establishes "Arguments presented by the applicant cannot take the place of evidence in the record. In re Schulze, 346 F.2d 600, 602, 145 USPQ 716, 718 (CCPA 1965) and In re De Blauwe, 736 F.2d 699, 705, 222 USPQ 191, 196 (Fed. Cir. 1984)." Examiner maintains the rejection.

Applicant's arguments with respect to claims 1 and 17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112: The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-2, 4-6, 9-10, 13-17, and 24-26 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1 and 17 recite "first marker pair", "second marker pair", and "third marker pair" (also present in dependent claims 4-5 and 13-14). Paragraphs 0007, 0068, 0073-75, 0079, and 0101 discuss pairs, but in the context of image-measurement pairs or image-segment pairs. The specification fails to disclose the markers being in pairs. Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing.

Claims 1 and 17 recite "a first marker pair on only a first image", "second marker pair on only a second image", and "third marker pair on only a third image". While the specification discusses use of markers on multiple images, the specification does not disclose that there is a pair of markers that are solely on each respective image. Paragraph 0053 teaches that the measurement markers on the other images are propagation measurement markers. Paragraph 0057 establishes that the marker is a propagation of the marker in another image. Therefore, the claim contains subject matter which is not described in the specification in such a way as to reasonably convey to one with ordinary skill in the art that the inventor had possession of the claimed invention at the time of filing. Claims that are not discussed above but are cited to be rejected under 35 U.S.C.
112(a) are also rejected because they inherit the deficiencies of the claims they respectively depend upon.

The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph: The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 4-6 and 24-26 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 4 is indefinite for the following reasons: Recites "propagate the first marker pair from the first image to the second image based on positional data of the ultrasound transducer array with respect to the different imaging planes". This claim element is indefinite. It would be unclear to one with ordinary skill in the art how the marker is propagated onto another image at a different imaging plane if it is of a different portion of the anatomical feature. Applicant is encouraged to provide consistent and clear language. Recites "propagate the first marker pair from the first image to the second image based on positional data of the ultrasound transducer array with respect to the different imaging planes". This claim element is indefinite. Claim 1 establishes that the first pair is set only according to the first image. The propagation of the first pair onto the second image establishes it on the second image and conflicts with the language of claim 1. Applicant is encouraged to provide consistent and clear language.
Recites "wherein, to generate the second marker pair, the processor circuit configured to: propagate the first marker pair from the first image to the second image based on positional data of the ultrasound transducer array with respect to the different imaging planes". This claim element is indefinite. The propagation of the first pair on the second image does not address in what manner the second pair is generated. It would be unclear to one with ordinary skill in the art what the generation of the second pair is in relation to the first pair propagation. Applicant is encouraged to provide consistent and clear language.

Claim 5 is indefinite for the following reasons: Recites "propagate the first marker pair from the first image to the second image based on the 3D spatial data". This claim element is indefinite. It would be unclear to one with ordinary skill in the art how the marker is propagated onto another image at a different imaging plane if it is of a different portion of the anatomical feature. Applicant is encouraged to provide consistent and clear language. Recites "propagate the first marker pair from the first image to the second image based on the 3D spatial data". This claim element is indefinite. Claim 1 establishes that the first pair is set only according to the first image. The propagation of the first pair onto the second image establishes it on the second image and conflicts with the language of claim 1. Applicant is encouraged to provide consistent and clear language. Recites "wherein, to generate the second marker pair, the processor circuit configured to determine 3D spatial data for the first image and the second image based on the positional data of the ultrasound transducer array; and propagate the first marker pair from the first image to the second image based on the 3D spatial data". This claim element is indefinite.
The propagation of the first pair on the second image does not address in what manner the second pair is generated. It would be unclear to one with ordinary skill in the art what the generation of the second pair is in relation to the first pair propagation. Applicant is encouraged to provide consistent and clear language.

Claim 24 is indefinite for the following reasons: Recites "wherein the final measurement is representative of an unacquired imaging plane of the patient's anatomy". This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the claim is establishing the measurement itself to be an unacquired imaging plane or that it has a value that an unacquired imaging plane would have. If it is the latter, it would be further unclear if the unacquired imaging plane is actively claimed, as the claim establishes that it is a plane that has not undergone acquisition. Applicant is encouraged to provide consistent and clear language.

Claim 25 is indefinite for the following reasons: Recites "wherein the unacquired imaging plane is located between the different imaging planes". This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the unacquired imaging plane is actively claimed, as the claim establishes that it is a plane that has not undergone acquisition. Applicant is encouraged to provide consistent and clear language.

Claim 26 is indefinite for the following reasons: Recites "wherein the unacquired imaging plane intersects one or more of the different imaging planes". This claim element is indefinite. It would be unclear to one with ordinary skill in the art if the unacquired imaging plane is actively claimed, as the claim establishes that it is a plane that has not undergone acquisition. Applicant is encouraged to provide consistent and clear language. Claims that are not discussed above but are cited to be rejected under 35 U.S.C.
112(b) are also rejected because they inherit the indefiniteness of the claims they respectively depend upon.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-2, 4-6, 9-10, 13-17, and 24-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claim recites an ultrasound imaging system and is therefore an apparatus. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation "receive a user input positioning a first marker pair on only a first image of the set of images, wherein a distance between the first marker pair comprises a first measurement of the anatomical feature across a first portion of the anatomical feature; generate, a second marker pair on only a second image and a third marker pair on only a third image, wherein the first image, the second image, and the third image are representative of different imaging planes of the patient's anatomy, wherein a distance between the second marker pair comprises a second measurement of the anatomical feature across a second portion of the anatomical feature and a distance between the third marker pair comprises a third measurement of the anatomical feature across a third portion of the anatomical feature; determine a final measurement of the anatomical feature across a portion of the anatomical feature different than the first portion, the second portion, and the third portion such that the final measurement is different than the first measurement, the second measurement, and the third measurement".
This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind, as it concerns the reception of a marker pair on the image with a distance between the pair that makes a first measurement, the generation of marker pairs for the second and third images whose distances represent the measurements, and a final measurement that is different from the other measurements. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea.

Step 2A, Prong 2, Integrated into Practical Application: No - The claim recites the following additional elements: "an ultrasound probe comprising an ultrasound transducer array; and a processor circuit in communication with the ultrasound transducer array, the processor circuit configured to: control the ultrasound transducer array to obtain a set of images of a three-dimensional (3D) volume of a patient's anatomy including an anatomical feature; provide the first image, the first marker pair, a second image of the set of images, and a third image of the set of images as an input to a predictive network trained on at least one of: a set of image and measurement data for feature measurement; or a set of image and segmentation data for image segmentation; and as an output of the predictive network, provide the final measurement to a display in communication with the processor circuit". Reception of ultrasound images of a volume of a patient's anatomy is a data gathering step that is a form of pre-solution insignificant activity. The display of the determined final measurement is a display step that merely amounts to a post-solution insignificant activity.
The use of a processor using a trained predictive network, an ultrasound array as part of an ultrasound probe, and a display does not integrate the judicial exception into a practical application, as they are merely used to perform the judicial exception or display its output. These additional elements, taken individually or in combination, merely amount to insignificant pre/post-solution activities and do not integrate the judicial exception into a practical application. This claim is therefore directed to an abstract idea. Step 2B, Inventive Concept: No - Similarly to Step 2A, Prong 2, the additional claim elements merely recite insignificant extra-solution activities, which do not amount to significantly more than the judicial exception. For these reasons, there is no inventive concept in the claim. In light of the above, claim 1 is ineligible.

Claim 2 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1 and Step 2A, Prong 1, Judicial Exception are discussed above in the claim 1 rejection. Claim 2 recites the following elements: "wherein the processor circuit configured to receive the user input from a user interface in communication with the processor circuit". This claim element is a mere data gathering step which amounts to a pre-solution insignificant activity. The use of a processor does not integrate the judicial exception into a practical application, as it is merely used to perform the judicial exception. This pre-solution insignificant activity does not integrate the judicial exception into a practical application, nor does it contain an inventive step. In light of the above, claim 2 is ineligible.

Claim 4 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claim recites a system and is therefore an apparatus.
Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation "propagate the first marker pair from the first image to the second image based on positional data of the ultrasound transducer array with respect to the different imaging planes". This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind, as it concerns the determination of the first measurement data in association with the following images based on positional information of the array in relation to the imaging planes. This can be done by determining a boundary trace in relation to the array and assessing the following images based on that relationship. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea.

Step 2A, Prong 2, Integrated into Practical Application: No - The claim recites the following additional elements: "wherein, to generate the second marker pair, the processor circuit configured to". The use of a processor does not integrate the judicial exception into a practical application, as it is merely used to perform the judicial exception. These additional elements, taken individually or in combination, merely amount to insignificant pre/post-solution activities and do not integrate the judicial exception into a practical application. This claim is therefore directed to an abstract idea. Step 2B, Inventive Concept: No - Similarly to Step 2A, Prong 2, the additional claim elements merely recite insignificant extra-solution activities, which do not amount to significantly more than the judicial exception. For these reasons, there is no inventive concept in the claim. In light of the above, claim 4 is ineligible. Claim 5 is rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claim recites a system and is therefore an apparatus. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation "determine 3D spatial data for the first image and the second image based on the positional data of the ultrasound transducer array; and propagate the first marker pair from the first image to the second image based on the 3D spatial data". This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind, as it concerns the determination of 3D spatial data for the images based on the position data and the propagation of that data to other images. This can be done by determining a boundary trace of a 3D object in relation to the array and assessing the following images based on that relationship. Alternatively, the 3D data can be a spatial consideration of image data in relation to an array in three-dimensional space. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea.

Step 2A, Prong 2, Integrated into Practical Application: No - The claim recites the following additional elements: "wherein, to generate the second marker pair, the processor circuit configured to". The use of a processor does not integrate the judicial exception into a practical application, as it is merely used to perform the judicial exception. These additional elements, taken individually or in combination, merely amount to insignificant pre/post-solution activities and do not integrate the judicial exception into a practical application. This claim is therefore directed to an abstract idea.
Step 2B, Inventive Concept: No - Similarly to Step 2A, Prong 2, the additional claim elements merely recite insignificant extra-solution activities, which do not amount to significantly more than the judicial exception. For these reasons, there is no inventive concept in the claim. In light of the above, claim 5 is ineligible.

Claim 6 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claim recites a system and is therefore an apparatus. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation "determine the positional data of the ultrasound transducer array with respect to the different imaging planes based on the inertial measurement data and an inertial-measurement-to-image transformation". This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind, as it concerns the determination of the position data of the array in relation to the imaging planes based on inertial measurement data and an image transformation. This can be done by assessing the data in the imaging planes and the change over time; in consideration of time, the positional data can be determined. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea.
Step 2A, Prong 2, Integrated into Practical Application: No - The claim recites the following additional elements: "wherein the ultrasound probe further comprises an inertial measurement tracker, wherein the processor circuit is configured to receive, from the inertial measurement tracker, inertial measurement data associated with the ultrasound transducer array and the different imaging planes, and wherein, to determine the 3D spatial data, the processor circuit configured to". The reception of inertial measurement data is a data gathering step that is a form of pre-solution insignificant activity. The use of a processor, an inertial measurement tracker (which the specification in paragraph 0005 states includes an accelerometer, gyroscope, and/or sensor), and a probe does not integrate the judicial exception into a practical application, as they are merely used to perform the judicial exception. These additional elements, taken individually or in combination, merely amount to insignificant pre/post-solution activities and do not integrate the judicial exception into a practical application. This claim is therefore directed to an abstract idea. Step 2B, Inventive Concept: No - Similarly to Step 2A, Prong 2, the additional claim elements merely recite insignificant extra-solution activities, which do not amount to significantly more than the judicial exception. For these reasons, there is no inventive concept in the claim. In light of the above, claim 6 is ineligible.

Claim 9 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claim recites a system and is therefore an apparatus.
Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation "determine a statistic measure comprising at least one of a confidence metric of the first measurement, a confidence metric of the second measurement, a mean value of the first measurement and the second measurement, a variance of the first measurement and the second measurement, or a standard deviation of the first measurement and the second measurement". This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind, as it concerns the determination of a distance, a confidence metric, an average, a variance, or a standard deviation of determined measurements. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea.

Step 2A, Prong 2, Integrated into Practical Application: No - The claim recites the following additional elements: "wherein the processor circuit is configured to: provide the statistic measure to the display". Display of the measurement is a display step that merely amounts to a post-solution insignificant activity. The use of a processor does not integrate the judicial exception into a practical application, as it is merely used to perform the judicial exception. These additional elements, taken individually or in combination, merely amount to insignificant pre/post-solution activities and do not integrate the judicial exception into a practical application. This claim is therefore directed to an abstract idea. Step 2B, Inventive Concept: No - Similarly to Step 2A, Prong 2, the additional claim elements merely recite insignificant extra-solution activities, which do not amount to significantly more than the judicial exception. For these reasons, there is no inventive concept in the claim.
In light of the above, claim 9 is ineligible.

Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1 and Step 2A, Prong 1, Judicial Exception are discussed above in the claim 1 rejection. Claim 10 recites the following elements: "a user interface in communication with the processor circuit and configured to provide a selection associated with the final measurement". This claim element is a mere data gathering step which amounts to a pre-solution insignificant activity. The use of an interface does not integrate the judicial exception into a practical application, as it is merely used to perform the judicial exception. This pre-solution insignificant activity does not integrate the judicial exception into a practical application, nor does it contain an inventive step. In light of the above, claim 10 is ineligible.

Claim 13 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1 and Step 2A, Prong 1, Judicial Exception are discussed above in the claim 1 rejection. Claim 13 recites the following elements: "wherein the set of image and measurement data for the feature measurement comprises a set of image-measurement pairs, and wherein each image-measurement pair of the set of image-measurement pair includes an image in a sequence of images of a 3D anatomical volume and a measurement of a feature of the 3D anatomical volume for the image". This claim element is a mere data gathering step which amounts to a pre-solution insignificant activity. This pre-solution insignificant activity does not integrate the judicial exception into a practical application, nor does it contain an inventive step. In light of the above, claim 13 is ineligible.

Claim 14 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 and Step 2A, Prong 1, Judicial Exception are discussed above in the claim 1 rejection. Claim 14 recites the following elements: “wherein the set of image and segmentation data for the image segmentation comprises a set of image-segment pairs, and wherein each image-segment pair of the set of image-segment pair includes an image in a sequence of images of a 3D anatomical volume and a segment of a feature of the 3D anatomical volume for the image”. This claim element is a mere data gathering step which amounts to a pre-solution insignificant activity. This pre-solution insignificant activity does not integrate the judicial exception into a practical application nor does it contain an inventive step. In light of the above, claim 14 is ineligible.

Claim 15 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claims recite a system and are therefore directed to an apparatus. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation “wherein the anatomical feature includes a fetal head, and wherein the first measurement, the second measurement, the third measurement, and the final measurement are associated with at least one of a circumference of the fetal head or a length of the fetal head”. This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind as it is regarding a concept relating to measurement of a circumference or a length of a fetal head. The human mind can measure distance or assess the circumference of an object. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea. Step 2A, Prong 2, Integrated into Practical Application: No - The claim does not contain additional elements.
Therefore, the claim does not integrate the judicial exception into a practical application. Step 2B, Inventive Concept: No - Similarly to Step 2A Prong 2, the claim does not contain additional elements. For these reasons, there is no inventive concept in the claim. In light of the above, claim 15 is ineligible.

Claim 16 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claims recite a system and are therefore directed to an apparatus. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation “wherein the anatomical feature includes a left ventricle, and wherein the first measurement, the second measurement, the third measurement, and the final measurement are associated with at least one of a width, a height, an area, or a volume of the left ventricle”. This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind as it is regarding a concept relating to the measurement of a width, height, area, or volume of a left ventricle. The human mind can measure width, height, area, or volume based on assessment of the image data. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea. Step 2A, Prong 2, Integrated into Practical Application: No - The claim does not contain additional elements. Therefore, the claim does not integrate the judicial exception into a practical application. Step 2B, Inventive Concept: No - Similarly to Step 2A Prong 2, the claim does not contain additional elements. For these reasons, there is no inventive concept in the claim. In light of the above, claim 16 is ineligible.

Claim 17 is rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claims recite a method of ultrasound imaging and are therefore directed to a method. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation “receiving, at the processor circuit, a user input positioning a first marker pair on only a first image of the set of images, wherein a distance between the first marker pair comprises a first measurement of the anatomical feature across a first portion of the anatomical feature; generating, as an output of the predictive network, a second marker pair on only a second image and a third marker pair on only a third image, wherein the first image, the second image, and the third image are representative of different imaging planes of the patient's anatomy, wherein a distance between the second marker pair comprises a second measurement of the anatomical feature across a second portion of the anatomical feature and a distance between the third marker pair comprises a third measurement of the anatomical feature across a third portion of the anatomical feature; determining a final measurement of the anatomical feature across a portion of the anatomical feature different than the first portion, the second portion, and the third portion such that the final measurement is different than the first measurement, the second measurement, and the third measurement”. This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind as it is regarding a concept relating to the reception of a marker pair on the image with a distance between the pair that makes a first measurement and a generation of a marker pair for the second and third images that are the distances between the pairs that represent the measurement and a final measurement that is different from the other measurements.
That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea. Step 2A, Prong 2, Integrated into Practical Application: No - The claim recites the following additional elements: “controlling, with a processor circuit, an ultrasound transducer array to obtain a set of images of a three-dimensional (3D) volume of a patient's anatomy including an anatomical feature; providing the first image, the first marker pair, a second image of the set of images, and a third image of the set of images as an input to a predictive network trained on at least one of: a set of image and measurement data for feature measurement; or a set of image and segmentation data for image segmentation; and providing the final measurement to a display in communication with the processor circuit”. Reception of ultrasound images of a volume of a patient anatomy is a data gathering step that is a form of a pre-solution insignificant activity. The display of the determined second measurement data is a display step that merely amounts to a post-solution insignificant activity. The use of a processor using a trained predictive network, ultrasound array as part of an ultrasound probe, and a display does not integrate the judicial exception into a practical application as it is merely used to perform the judicial exception or display the output of judicial exception. These additional elements, taken individually or in combination, merely amount to insignificant pre/post-solution activities and do not integrate the judicial exception into a practical application. This claim is therefore directed to an abstract idea. Step 2B, Inventive Concept: No - Similarly to Step 2A Prong 2, the additional claim elements merely recite insignificant extra-solution activities, which do not amount to significantly more than the judicial exception. 
For these reasons, there is no inventive concept in the claim. In light of the above, claim 17 is ineligible.

Claim 24 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claims recite a system and are therefore directed to an apparatus. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation “wherein the final measurement is representative of an unacquired imaging plane of the patient's anatomy”. This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind as it is regarding a concept relating to the measurement being representative of an unimaged plane of anatomy. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea. Step 2A, Prong 2, Integrated into Practical Application: No - The claim does not contain additional elements. Therefore, the claim does not integrate the judicial exception into a practical application. Step 2B, Inventive Concept: No - Similar to Step 2A Prong 2, the claim does not contain additional elements. For these reasons, there is no inventive concept in the claim. In light of the above, claim 24 is ineligible.

Claim 25 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claims recite a system and are therefore directed to an apparatus. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation “wherein the unacquired imaging plane is located between the different imaging planes”.
This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind as it is regarding a concept relating to the unimaged plane being an interpolation between imaging planes. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea. Step 2A, Prong 2, Integrated into Practical Application: No - The claim does not contain additional elements. Therefore, the claim does not integrate the judicial exception into a practical application. Step 2B, Inventive Concept: No - Similar to Step 2A Prong 2, the claim does not contain additional elements. For these reasons, there is no inventive concept in the claim. In light of the above, claim 25 is ineligible.

Claim 26 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Step 1: Statutory Category: Yes - The claims recite a system and are therefore directed to an apparatus. Step 2A, Prong 1, Judicial Exception: Yes - The claim recites the limitation “wherein the unacquired imaging plane intersects one or more of the different imaging planes”. This limitation, as drafted, is a process step that, under its broadest reasonable interpretation, covers the performance of the limitation in the mind as it is regarding a concept relating to the unimaged plane being an interpolation with intersection to imaging planes. That is, nothing in the claim element precludes the step from practically being performed in the mind and/or being performed with the aid of a pen and paper. Accordingly, the claim recites a mental process-type abstract idea. Step 2A, Prong 2, Integrated into Practical Application: No - The claim does not contain additional elements.
Therefore, the claim does not integrate the judicial exception into a practical application. Step 2B, Inventive Concept: No - Similar to Step 2A Prong 2, the claim does not contain additional elements. For these reasons, there is no inventive concept in the claim. In light of the above, claim 26 is ineligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 10, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. (PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159).

Regarding claim 1, Gerard teaches an ultrasound imaging system comprising: an ultrasound probe comprising an ultrasound transducer array (Paragraph 0018 teaches the use of a probe, memory, processor and interface. See Fig. 1); and a processor circuit in communication with the ultrasound transducer array (Paragraph 0018 teaches the use of a probe, memory, processor and interface. See Fig.
1), the processor circuit configured to: control the ultrasound transducer array to obtain a set of images of a three-dimensional (3D) volume of a patient's anatomy including an anatomical feature (Paragraph 0022 teaches that the image data can be in 3D and is of the patient anatomy. See Fig. 2); receive a user input positioning a first marker pair on only a first image of the set of images, wherein a distance between the first marker pair comprises a first measurement of the anatomical feature across a first portion of the anatomical feature (Paragraph 0065 teaches that the user can set cursors that are indicating the distance within the measurement window and indicates the distance for the anatomical measurements. See Fig. 6); provide the first image, the first marker pair, a second image of the set of images, and a third image of the set of images as an input to a predictive network trained on at least one of: a set of image and measurement data for feature measurement; or a set of image and segmentation data for image segmentation (Paragraph 0026 teaches that the neural network outputs an additional image with the differing anatomical markers and the analysis of the characteristics of the pixels to account for the field of view that was originally shown in the input image. Paragraphs 0027-28 teach that the algorithm is trained on imaging data of differing FOVs of the anatomical interest and is able to identify and assign anatomical markers based on shape, position, intensity and the like). 
However, Gerard is silent regarding an ultrasound system, comprising: generate, as an output of the predictive network, a second marker pair on only a second image and a third marker pair on only a third image, wherein the first image, the second image, and the third image are representative of different imaging planes of the patient's anatomy, wherein a distance between the second marker pair comprises a second measurement of the anatomical feature across a second portion of the anatomical feature and a distance between the third marker pair comprises a third measurement of the anatomical feature across a third portion of the anatomical feature; determine a final measurement of the anatomical feature across a portion of the anatomical feature different than the first portion, the second portion, and the third portion such that the final measurement is different than the first measurement, the second measurement, and the third measurement; and provide the final measurement to a display in communication with the processor circuit. In an analogous imaging field of endeavor, regarding ultrasound imaging and position tracking, Voigt teaches a system, comprising: provide the first image, the first marker pair, a second image of the set of images, and a third image of the set of images as an input to a predictive network trained on at least one of: a set of image and measurement data for feature measurement; or a set of image and segmentation data for image segmentation (Paragraph 0056 teaches that the machine learnt classifier is trained to find the anatomical landmarks based on the user input of the location on the image and the resulting ray or rays. The classifier is able to “snap” onto the landmark. Paragraph 0064 teaches that in act 70, the identified 2D or 3D location is tracked throughout the sequence of images. Paragraph 0041 teaches that a plurality of rays can be defined for ray tracing. 
For example, four, eight, or sixteen rays extending in one or multiple directions are defined. Paragraphs 0061 and 0094 teach that a plurality of positions can be utilized for measurement. Fig. 4 shows a sequence of images. Paragraph 0023 teaches that the steps of Fig. 1 are performed for all the planes that are acquired. Paragraph 0055 teaches that the classifier can be trained with training data with different organs or scanning applications. The training can be with features with regards to heart chambers and/or valve scans. The training allows for the classifier to find points along organ boundaries. Paragraph 0004 teaches that the machine learning tech can be trained with specific segmentation and/or measurements); generate, as an output of the predictive network, a second marker pair on only a second image and a third marker pair on only a third image, wherein the first image, the second image, and the third image are representative of different imaging planes of the patient's anatomy, wherein a distance between the second marker pair comprises a second measurement of the anatomical feature across a second portion of the anatomical feature and a distance between the third marker pair comprises a third measurement of the anatomical feature across a third portion of the anatomical feature (Paragraphs 0036-37 teach that the measurement caliper is applied on the medical image. Paragraph 0038 teaches that the boundary can also be traced. Paragraphs 0039-42 teach the performance of ray tracing based on the selected position and boundaries. Paragraph 0023 teaches that these are performed on the same set representing a plane or volume for acts 64-68. Paragraph 0023 teaches that act 70 is performed instead of acts 64-68 after the location has been identified on one data set. Paragraph 0064 teaches that in act 70, the identified 2D or 3D location is tracked throughout the sequence of images. This is based on the position and depth.
Paragraph 0055 teaches that the machine learnt classifier is trained to identify a depth along a view direction from ultrasound data. Paragraph 0019 teaches that the machine learning is able to determine the landmark and the boundary. Paragraph 0074 teaches that the boundary is determined and the other measurements like length or circumference, area, minimum diameter, and/or maximum diameters, and volume are based on the contour boundary. Fig. 4 shows the boundary to be about the outside of the anatomical feature. The area, diameter, and volume encompass the portions of the anatomical feature within the boundary. Paragraph 0041 teaches that a plurality of rays can be defined for ray tracing. For example, four, eight, or sixteen rays extending in one or multiple directions are defined. Paragraphs 0061 and 0094 teach that a plurality of positions can be utilized for measurement. Paragraph 0023 teaches that the steps of Fig. 1 are performed for all the planes that are acquired); determine a final measurement of the anatomical feature across a portion of the anatomical feature different than the first portion, the second portion, and the third portion such that the final measurement is different than the first measurement, the second measurement, and the third measurement (Paragraphs 0043-44 teach that the locations can be interpolated between the user input positions. The interpolation can be along the line amongst the intensity of the images. The ray can define a sub-set of scan data in a cylinder, cone, or other shape about the ray. Paragraph 0074 teaches that the Fig. 4 is according to the curve to fit the points and interpolated from the points); and provide the final measurement to a display in communication with the processor circuit (Paragraphs 0076-77 teach the display of the quantity and the tracking of the locations).
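The "final measurement" mapping above relies on interpolating between locations tied to acquired planes. Purely as an illustration of that kind of computation, and not as a representation of Voigt's actual method (which need not be linear), the following sketch interpolates a measurement for a plane lying between two acquired planes; all names, positions, and the linear scheme are assumptions:

```python
# Illustrative sketch only: deriving a "final measurement" for an
# unacquired plane between two acquired imaging planes by linear
# interpolation of the per-plane measurements. Hypothetical names/values.
def interpolate_measurement(plane_a_pos: float, meas_a: float,
                            plane_b_pos: float, meas_b: float,
                            target_pos: float) -> float:
    """Linearly interpolate a measurement at target_pos between two planes."""
    t = (target_pos - plane_a_pos) / (plane_b_pos - plane_a_pos)
    return (1.0 - t) * meas_a + t * meas_b

# A target plane midway between acquired planes at 0 mm and 10 mm:
final = interpolate_measurement(0.0, 94.0, 10.0, 98.0, 5.0)
# final = 96.0
```

Note that the interpolated value differs from either input measurement, matching the claim language that the final measurement is "different than" the first, second, and third measurements.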
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Gerard with Voigt’s teaching of a trained predictive network. This modified apparatus would allow the user to accurately find the landmarks in a robust and versatile manner (Paragraph 0019 of Voigt). Furthermore, the modification allows for regular anatomical structures to be efficiently and robustly modeled in fully-automatic or semi-automatic ways (Paragraph 0004 of Voigt). Regarding claim 2, modified Gerard teaches a system according to claim 1, as discussed above. Gerard further teaches a system, wherein the processor circuit configured to receive the user input from a user interface in communication with the processor circuit (Paragraph 0065 teaches that the user can set cursors that are indicating the distance within the measurement window and indicates the distance for the anatomical measurements. See Fig. 6). Regarding claim 10, modified Gerard teaches the system in claim 1, as discussed above. Gerard further teaches a system, further comprising: a user interface in communication with the processor circuit and configured to provide a selection associated with the final measurement (Paragraph 0065 teaches that the user can set cursors that are indicating the distance within the measurement window and indicates the distance for the anatomical measurements. See Fig. 6). Regarding claim 16, modified Gerard teaches the system in claim 1, as discussed above. Gerard further teaches a system, wherein the anatomical feature includes a left ventricle, and wherein the first measurement, the second measurement, the third measurement, and the final measurement are associated with at least one of a width, a height, an area, or a volume of the left ventricle (Paragraph 0053 teaches the assessment of the left ventricle. 
Paragraph 0026 teaches that the neural network outputs an additional image with the differing anatomical markers and the analysis of the characteristics of the pixels to account for the field of view that was originally shown in the input image. Paragraphs 0027-28 teach that the algorithm is trained on imaging data of differing FOVs of the anatomical interest and is able to identify and assign anatomical markers based on shape, position, intensity and the like. Paragraph 0044 teaches the assessment of the volume, area, diameter, and thickness). Regarding claim 17, Gerard teaches a method of ultrasound imaging, comprising: controlling, with a processor circuit, an ultrasound transducer array to obtain a set of images of a three-dimensional (3D) volume of a patient's anatomy including an anatomical feature (Paragraph 0022 teaches that the image data can be in 3D and is of the patient anatomy. See Fig. 2. Paragraph 0018 teaches the use of a probe, memory, processor and interface. See Fig. 1); receiving, at the processor circuit, a user input positioning a first marker pair on only a first image of the set of images, wherein a distance between the first marker pair comprises a first measurement of the anatomical feature across a first portion of the anatomical feature (Paragraph 0065 teaches that the user can set cursors that are indicating the distance within the measurement window and indicates the distance for the anatomical measurements. See Fig. 6. Paragraph 0018 teaches the use of a probe, memory, processor and interface. See Fig.
1); providing the first image, the first marker pair, a second image of the set of images, and a third image of the set of images as an input to a predictive network trained on at least one of: a set of image and measurement data for feature measurement; or a set of image and segmentation data for image segmentation (Paragraph 0026 teaches that the neural network outputs an additional image with the differing anatomical markers and the analysis of the characteristics of the pixels to account for the field of view that was originally shown in the input image. Paragraphs 0027-28 teach that the algorithm is trained on imaging data of differing FOVs of the anatomical interest and is able to identify and assign anatomical markers based on shape, position, intensity and the like). However, Gerard is silent regarding a method, generating, as an output of the predictive network, a second marker pair on only a second image and a third marker pair on only a third image, wherein the first image, the second image, and the third image are representative of different imaging planes of the patient's anatomy, wherein a distance between the second marker pair comprises a second measurement of the anatomical feature across a second portion of the anatomical feature and a distance between the third marker pair comprises a third measurement of the anatomical feature across a third portion of the anatomical feature; determining a final measurement of the anatomical feature across a portion of the anatomical feature different than the first portion, the second portion, and the third portion such that the final measurement is different than the first measurement, the second measurement, and the third measurement; and providing the final measurement to a display in communication with the processor circuit. 
In an analogous imaging field of endeavor, regarding ultrasound imaging and position tracking, Voigt teaches a method, providing the first image, the first marker pair, a second image of the set of images, and a third image of the set of images as an input to a predictive network trained on at least one of: a set of image and measurement data for feature measurement; or a set of image and segmentation data for image segmentation (Paragraph 0056 teaches that the machine learnt classifier is trained to find the anatomical landmarks based on the user input of the location on the image and the resulting ray or rays. The classifier is able to “snap” onto the landmark. Paragraph 0064 teaches that in act 70, the identified 2D or 3D location is tracked throughout the sequence of images. Paragraph 0041 teaches that a plurality of rays can be defined for ray tracing. For example, four, eight, or sixteen rays extending in one or multiple directions are defined. Paragraphs 0061 and 0094 teach that a plurality of positions can be utilized for measurement. Fig. 4 shows a sequence of images. Paragraph 0023 teaches that the steps of Fig. 1 are performed for all the planes that are acquired. Paragraph 0055 teaches that the classifier can be trained with training data with different organs or scanning applications. The training can be with features with regards to heart chambers and/or valve scans. The training allows for the classifier to find points along organ boundaries.
Paragraph 0004 teaches that the machine learning tech can be trained with specific segmentation and/or measurements); generating, as an output of the predictive network, a second marker pair on only a second image and a third marker pair on only a third image, wherein the first image, the second image, and the third image are representative of different imaging planes of the patient's anatomy, wherein a distance between the second marker pair comprises a second measurement of the anatomical feature across a second portion of the anatomical feature and a distance between the third marker pair comprises a third measurement of the anatomical feature across a third portion of the anatomical feature (Paragraphs 0036-37 teach that the measurement caliper is applied on the medical image. Paragraph 0038 teaches that the boundary can also be traced. Paragraphs 0039-42 teach the performance of ray tracing based on the selected position and boundaries. Paragraph 0023 teaches that these are performed on the same set representing a plane or volume for acts 64-68. Paragraph 0023 teaches that act 70 is performed instead of acts 64-68 after the location has been identified on one data set. Paragraph 0064 teaches that in act 70, the identified 2D or 3D location is tracked throughout the sequence of images. This is based on the position and depth. Paragraph 0055 teaches that the machine learnt classifier is trained to identify a depth along a view direction from ultrasound data. Paragraph 0019 teaches that the machine learning is able to determine the landmark and the boundary. Paragraph 0074 teaches that the boundary is determined and the other measurements like length or circumference, area, minimum diameter, and/or maximum diameters, and volume are based on the contour boundary. Fig. 4 shows the boundary to be about the outside of the anatomical feature. The area, diameter, and volume encompass the portions of the anatomical feature within the boundary.
Paragraph 0041 teaches that a plurality of rays can be defined for ray tracing. For example, four, eight, or sixteen rays extending in one or multiple directions are defined. Paragraphs 0061 and 0094 teach that a plurality of positions can be utilized for measurement. Paragraph 0023 teaches that the steps of Fig. 1 are performed for all the planes that are acquired); determining a final measurement of the anatomical feature across a portion of the anatomical feature different than the first portion, the second portion, and the third portion such that the final measurement is different than the first measurement, the second measurement, and the third measurement; and providing the final measurement to a display in communication with the processor circuit (Paragraphs 0043-44 teach that the locations can be interpolated between the user input positions. The interpolation can be along the line amongst the intensity of the images. The ray can define a sub-set of scan data in a cylinder, cone, or other shape about the ray. Paragraph 0074 teaches that the Fig. 4 is according to the curve to fit the points and interpolated from the points. Paragraphs 0076-77 teach the display of the quantity and the tracking of the locations). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Gerard with Voigt’s teaching of a trained predictive network. This modified apparatus would allow the user to accurately find the landmarks in a robust and versatile manner (Paragraph 0019 of Voigt). Furthermore, the modification allows for regular anatomical structures to be efficiently and robustly modeled in fully-automatic or semi-automatic ways (Paragraph 0004 of Voigt).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. (PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159) further in view of White et al. (PGPUB No. US 2017/0325783).
Regarding claim 4, modified Gerard teaches the system in claim 1, as discussed above. However, the combination of Gerard and Voigt is silent regarding a system, wherein, to generate the second marker pair, the processor circuit configured to: propagate the first marker pair from the first image to the second image based on positional data of the ultrasound transducer array with respect to the different imaging planes. In an analogous imaging field of endeavor, regarding ultrasound imaging and position tracking, White teaches a system, wherein, to generate the second marker pair, the processor circuit configured to: propagate the first marker pair from the first image to the second image based on positional data of the ultrasound transducer array with respect to the different imaging planes (Abstract teaches that the ultrasound image data is acquired in discrete time increments at one or more positions relative to a subject. Control points are added by a user for two or more image frames and a processor interpolates the location of the control points for image frames obtained at in-between times. See Fig. 2. Paragraph 0049 teaches that the probe is moved at varying distances and used in the process of generating a boundary of one or more anatomical structures where the boundary is generated on the other image frames based on the user input information. See Fig. 7. Claim 8 teaches wherein the interpolating is weighted based on the distances between the second position and the first and third positions, respectively). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with White’s teaching of the propagation to the other images based on array positioning. This modified apparatus would allow the user to improve quality of the frames (Paragraph 0042 of White).
Furthermore, the modification will allow the user to more accurately discern the anatomical boundary (Paragraph 0016 of White). Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. (PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159) further in view of White et al. (PGPUB No. US 2017/0325783) further in view of Washburn et al. (PGPUB No. US 2008/0221446). Regarding claim 5, modified Gerard teaches the system in claim 4, as discussed above. However, the combination of Gerard, Voigt, and White is silent regarding a system, wherein, to generate the second marker pair, the processor circuit configured to: determine 3D spatial data for the first image and the second image based on the positional data of the ultrasound transducer array; and propagate the first marker pair from the first image to the second image based on the 3D spatial data. In an analogous imaging field of endeavor, regarding multiplane ultrasound image acquisition and volume data analysis based on points of interest, Washburn teaches a system, wherein, to generate the second marker pair, the processor circuit configured to: determine 3D spatial data for the first image and the second image based on the positional data of the ultrasound transducer array (Paragraph 0021 teaches that the imaging can be done with 3D scanning. Paragraph 0040 teaches that the tracking point can be placed on the 3D image and tracked. The image can be tracked based on the probe viewing. Paragraph 0026 teaches that a position sensor can be on or within an ultrasound probe 106 and is able to track the position information of the probe); and propagate the first marker pair from the first image to the second image based on the 3D spatial data (Paragraph 0055 teaches that the structure on the 3D image can be marked with the tracking point and the point can help return to the point of interest and the determination of a new view. 
Paragraph 0071 teaches that a 3D view assists in the placement of target points. Claim 8 teaches that the subsequent image display and the positioning of the tracking points are done based on the reference coordinate system). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard, Voigt, and White with Washburn’s teaching of determining 3D spatial data based on position data and propagating the measurement data based on the 3D spatial data. This modified apparatus would improve the ability of the user to locate the structure of interest from multiple views, which would improve diagnostic confidence and efficiency (Paragraph 0005 of Washburn). Furthermore, the modification allows easy determination of objects to avoid during interventional procedures (Paragraph 0073 of Washburn). Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. (PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159) further in view of White et al. (PGPUB No. US 2017/0325783) further in view of Washburn et al. (PGPUB No. US 2008/0221446) further in view of Bauer et al. (PGPUB No. US 2022/0148199). Regarding claim 6, modified Gerard teaches the system in claim 5, as discussed above. However, the combination of Gerard, Voigt, White, and Washburn is silent regarding a system, wherein the ultrasound probe further comprises an inertial measurement tracker, wherein the processor circuit is configured to receive, from the inertial measurement tracker, inertial measurement data associated with the ultrasound transducer array and the different imaging planes, and wherein, to determine the 3D spatial data, the processor circuit configured to: determine the positional data of the ultrasound transducer array with respect to the different imaging planes based on the inertial measurement data and an inertial-measurement-to-image transformation. 
In an analogous imaging field of endeavor, regarding multiplane ultrasound image acquisition and volume data analysis based on points of interest, Bauer teaches a system, wherein the ultrasound probe further comprises an inertial measurement tracker (Paragraph 0086 teaches that an IMU sensor with accelerometers and gyroscopes can be used in the determination of motion. Paragraph 0126 teaches that the sensors are attached to the probe. Paragraph 0089 teaches that the probe is an array probe), wherein the processor circuit (Paragraph 0148 teaches that hardware is used to perform the system functions. Furthermore, a processor is inherently present for operation of computational systems) is configured to receive, from the inertial measurement tracker, inertial measurement data associated with the ultrasound transducer array and the different imaging planes (Paragraph 0167 teaches that the image and the IMU data are fused together. Paragraph 0086 teaches that an IMU sensor with accelerometers and gyroscopes can be used in the determination of motion of the probe), and wherein, to determine the 3D spatial data, the processor circuit configured to: determine the positional data of the ultrasound transducer array with respect to the different imaging planes based on the inertial measurement data and an inertial-measurement-to-image transformation (Paragraph 0034 teaches that the motion of the probe between times points can be represented based on the relative transformation of the coordinate systems of the multiple frames. This can be done for all the images. Paragraph 0038 teaches that the motion of the probe is estimated based on the image analysis of the frames. Paragraph 0029 teaches that the frames can be 3D images. Paragraph 0086 teaches that an IMU sensor with accelerometers and gyroscopes can be used in the determination of motion of the probe. Paragraph 0167 teaches that the image and the IMU data are fused together). 
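The claimed inertial-measurement-to-image transformation can be read as composing two rigid transforms: a fixed probe-to-image calibration and the IMU-derived probe pose. A sketch under that assumption, using plain 4x4 homogeneous matrices; the matrix layout and the names are illustrative, not taken from the references:

```python
def mat_vec(m, v):
    """Multiply a 4x4 homogeneous matrix by a 4-vector."""
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def image_point_to_world(pixel_xy, T_world_probe, T_probe_image):
    """Map an in-plane point to 3D world coordinates by applying the fixed
    probe-to-image calibration first, then the IMU-derived probe pose.
    The imaging plane is taken as z = 0 in image coordinates (assumption)."""
    p = [pixel_xy[0], pixel_xy[1], 0.0, 1.0]
    return mat_vec(T_world_probe, mat_vec(T_probe_image, p))[:3]
```

With an identity calibration and a probe pose translated 5 units along z, the image point (2, 3) lands at the 3D point (2, 3, 5), which is how marker pairs placed in one plane acquire consistent 3D coordinates across planes.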
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard, Voigt, White, and Washburn with Bauer’s teaching of the use of an inertial measurement tracker that is used to determine 3D spatial data and determine the position information of the array based on inertial and transformation information. This modified apparatus would allow the user to improve tracking accuracy (Paragraph 0089 of Bauer). Furthermore, the modification makes probe tracking efficient (Paragraph 0010 of Bauer). Furthermore, the noise can be reduced by using the accelerometers (Paragraph 0086 of Bauer). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. (PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159) further in view of Kapoor et al. (PGPUB No. US 2015/0133784). Regarding claim 9, modified Gerard teaches the system in claim 1, as discussed above. However, the combination of Gerard and Voigt is silent regarding a system, wherein the processor circuit is configured to: determine a statistic measure comprising at least one of a confidence metric of the first measurement, a confidence metric of the second measurement, a mean value of the first measurement and the second measurement, a variance of the first measurement and the second measurement, or a standard deviation of the first measurement and the second measurement; and provide the statistic measure to the display. 
In an analogous imaging field of endeavor, regarding multiplane ultrasound image acquisition and volume data analysis based on points of interest, Kapoor teaches a system, wherein the processor circuit is configured to: determine a statistic measure comprising at least one of a confidence metric of the first measurement, a confidence metric of the second measurement, a mean value of the first measurement and the second measurement, a variance of the first measurement and the second measurement, or a standard deviation of the first measurement and the second measurement; and provide the statistic measure to the display (Paragraphs 0095-96 teach the measurement and display of the average measurements. Paragraph 0110 teaches the display of confidence values). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with Kapoor’s teaching of the display of statistical information. This modified apparatus would allow the user to improve 3D imaging (Paragraph 0023 of Kapoor). Furthermore, the modification achieves high quality 3D imaging and reconstruction (Paragraph 0021 of Kapoor). Claims 13-14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. (PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159) further in view of Hansegård et al. ("Constrained Active Appearance Models for Segmentation of Triplane Echocardiograms", October 2007, IEEE Transactions on Medical Imaging, Vol. 26, No. 10, pages 1391-1400). Regarding claim 13, modified Gerard teaches the system in claim 1, as discussed above. 
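The statistic measure recited in claim 9 reduces to standard descriptive statistics over the per-plane measurements. A minimal sketch using Python's statistics module; the use of population (rather than sample) variance is an assumption, since the claim does not specify either:

```python
from statistics import mean, pstdev, pvariance

def measurement_statistics(measurements):
    """Summary statistics over per-plane measurements (e.g. one measurement
    of the anatomical feature per imaging plane). Names are illustrative."""
    return {
        "mean": mean(measurements),
        "variance": pvariance(measurements),   # population variance
        "std_dev": pstdev(measurements),       # population std deviation
    }
```

For a first measurement of 10.0 and a second of 14.0, this yields a mean of 12.0, variance of 4.0, and standard deviation of 2.0, the kind of values a system would provide to the display alongside the measurements.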
However, the combination of Gerard and Voigt is silent regarding a system, wherein the set of image and measurement data for the feature measurement comprises a set of image-measurement pairs, and wherein each image-measurement pair of the set of image-measurement pair includes an image in a sequence of images of a 3D anatomical volume and a measurement of a feature of the 3D anatomical volume for the image. In an analogous imaging field of endeavor, regarding ultrasound image analysis based on points of interest on an anatomical volume, Hansegård teaches a system, wherein the set of image and measurement data for the feature measurement comprises a set of image-measurement pairs, and wherein each image-measurement pair of the set of image-measurement pair includes an image in a sequence of images of a 3D anatomical volume and a measurement of a feature of the 3D anatomical volume for the image (Abstract teaches that multiview active appearance models (AAMs) for left ventricle segmentation are used and the edge detection is done based on segmentation algorithms. Paragraph 5 of the Introduction teaches that the AAMs can also consider the object texture. Paragraph 1 of the “Independent Active Appearance Models” teaches that the training can be done based on texture parameters and utilize equation 3. Paragraph 3 of “Basic Active Appearance Models” teaches that equation 3 performs the parameter definition in association with an image frame. Paragraph 3 of “Multiview AAMs” teaches that the texture model is similar to the shape model and the image patches are warped onto their corresponding frames. Paragraph 4 of the “Model Training” teaches the training of the models in accordance with shape and texture. “Experiments” teaches the implementation of the models and the assessment of the landmarks and the data on the captured views. The texture was averaged. Fig. 3 shows the imaging is triplanar and performed for the cardiac structures, which are volumetric). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with Hansegård’s teaching of a predictive network trained on feature image pair data. This modified apparatus would allow the user to obtain multiview data that has improved results and improved measurements (Conclusion of Hansegård). Furthermore, the modification has higher accuracy and improved volumetric determinations (Introduction of Hansegård). Regarding claim 14, modified Gerard teaches the system in claim 1, as discussed above. However, the combination of Gerard and Voigt is silent regarding a system, wherein the set of image and segmentation data for the image segmentation comprises a set of image-segment pairs, and wherein each image-segment pair of the set of image-segment pair includes an image in a sequence of images of a 3D anatomical volume and a segment of a feature of the 3D anatomical volume for the image. In an analogous imaging field of endeavor, regarding ultrasound image analysis based on points of interest on an anatomical volume, Hansegård teaches a system, wherein the set of image and segmentation data for the image segmentation comprises a set of image-segment pairs, and wherein each image-segment pair of the set of image-segment pair includes an image in a sequence of images of a 3D anatomical volume and a segment of a feature of the 3D anatomical volume for the image (Abstract teaches that multiview active appearance models (AAMs) for left ventricle segmentation are used and the edge detection is done based on segmentation algorithms. Paragraph 1 of “Multiview AAMs” teaches that the shape vector is defined per view for all the frames. Paragraph 4 of the “Model Training” teaches the training of the models in accordance with shape and texture. “Experiments” teaches the implementation of the models and the assessment of the landmarks and the data on the captured views. Fig. 
3 shows the imaging is triplanar and performed for the cardiac structures, which are volumetric). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with Hansegård’s teaching of a predictive network trained on segment image pair data. This modified apparatus would allow the user to obtain Multiview data that has improved results and improved measurements (Conclusion of Hansegård). Furthermore, the modification has higher accuracy and improved volumetric determinations (Introduction of Hansegård). Regarding claim 16, modified Gerard teaches the system in claim 1, as discussed above. Gerard further teaches a system, wherein the anatomical feature includes a left ventricle, and wherein the first measurement, the second measurement, the third measurement, and the final measurement are associated with at least one of a width, a height, an area, or a volume of the left ventricle (Paragraph 0053 teaches the assessment of the left ventricle. Paragraph 0026 teaches that the neural network outputs an additional image with the differing anatomical markers and the analysis of the characteristics of the pixels to account for the field of view that was originally shown in the input image. Paragraphs 0027-28 teach that the algorithm is trained on imaging data of differing FOVs of the anatomical interest and is able to identify and assign anatomical markers based on shape, position, intensity and the like. Paragraph 0044 teaches the assessment of the volume, area, diameter, and thickness). 
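For context on the left-ventricle measurements discussed above (width, height, area, volume), one standard single-plane echocardiographic estimate of LV volume is the area-length formula V = (8 / (3π)) · A² / L. This is general background only; the cited references may compute volume differently:

```python
import math

def lv_volume_area_length(area, length):
    """Single-plane area-length approximation of left-ventricular volume:
    V = (8 / (3*pi)) * A**2 / L, where A is the cavity area in one imaging
    plane and L is the long-axis length. Standard formula, offered as
    illustration rather than the method of the cited references."""
    return (8.0 / (3.0 * math.pi)) * area ** 2 / length
```

As a sanity check, a sphere of radius r has plane area A = πr² and length L = 2r, for which the formula recovers the exact sphere volume 4πr³/3.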
In an analogous imaging field of endeavor, regarding ultrasound image analysis based on points of interest on an anatomical volume, Hansegård teaches a system, wherein the anatomical feature includes a left ventricle, and wherein the first measurement, the second measurement, the third measurement, and the final measurement are associated with at least one of a width, a height, an area, or a volume of the left ventricle (Paragraph 2 of “Model Training” teaches that the smallest and largest volumes of the LV are determined for the frames. Paragraph 2 of “Active Appearance Models Constrained by DP” teaches that the LV volume estimates are determined by the AAM). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with Hansegård’s teaching of measuring the volume of the left ventricle. This modified apparatus would allow the user to obtain multiview data that has improved results and improved measurements (Conclusion of Hansegård). Furthermore, the modification has higher accuracy and improved volumetric determinations (Introduction of Hansegård). Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. (PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159) further in view of Li et al. (PGPUB No. US 2020/0134825). Regarding claim 15, modified Gerard teaches the system in claim 1, as discussed above. However, the combination of Gerard and Voigt is silent regarding a system, wherein the anatomical feature includes a fetal head, and wherein the first measurement, the second measurement, the third measurement, and the final measurement are associated with at least one of a circumference of the fetal head or a length of the fetal head. 
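For context, fetal head circumference is commonly estimated by treating the head outline as an ellipse with semi-axes derived from two diameter measurements; Ramanujan's perimeter approximation is one well-known option. This is general background, not necessarily the method of the cited references:

```python
import math

def head_circumference(a, b):
    """Ellipse-perimeter estimate of head circumference from semi-axes a and b
    (e.g. half the biparietal and occipitofrontal diameters), using
    Ramanujan's first approximation:
    C ~= pi * (3*(a + b) - sqrt((3*a + b) * (a + 3*b)))."""
    return math.pi * (3.0 * (a + b) - math.sqrt((3.0 * a + b) * (a + 3.0 * b)))
```

For equal semi-axes the formula collapses to the exact circle circumference 2πr, which makes it easy to sanity-check.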
In an analogous imaging field of endeavor, regarding ultrasound image analysis based on points of interest on an anatomical volume, Li teaches a system, wherein the anatomical feature includes a fetal head, and wherein the first measurement, the second measurement, the third measurement, and the final measurement are associated with at least one of a circumference of the fetal head or a length of the fetal head (Paragraph 0038 teaches that the length and circumference of a structure can be measured. Paragraphs 0115-0116 teach the measurement of the diameter and circumference of a fetal head. See Fig. 11). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with Li’s teaching of measuring the circumference and length of the fetal head. This modified apparatus would allow the user to monitor and assess fetal growth over time and make sure it is favorable (Paragraph 0115 of Li). Furthermore, the modification provides an apparatus with a small amount of computation and high speed (Paragraph 0009 of Li). Claims 24-25 are rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. (PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159) further in view of Lachaine et al. (PGPUB No. US 2009/0041323). Regarding claim 24, modified Gerard teaches the system in claim 1, as discussed above. However, the combination of Gerard and Voigt is silent regarding a system, wherein the final measurement is representative of an unacquired imaging plane of the patient's anatomy. In an analogous imaging field of endeavor, regarding anatomical feature tracking and ultrasound image processing, Lachaine teaches a system, wherein the final measurement is representative of an unacquired imaging plane of the patient's anatomy (Paragraphs 0012-13 allow for the interpolation of images between acquired images. 
The anatomical structure can be constructed with respect to the interpolation. See Fig. 2 and the interpolated plane being between the acquired planes). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with Lachaine’s teaching of an assessment of an unacquired imaging plane. This modified apparatus would allow the user to improve reconstructed images and their accuracy (Paragraph 0011 of Lachaine). Furthermore, the modification addresses the need for a reconstruction that can solve the issue of slow processing time (Paragraph 0010 of Lachaine). Regarding claim 25, modified Gerard teaches the system in claim 24, as discussed above. However, the combination of Gerard and Voigt is silent regarding a system, wherein the unacquired imaging plane is located between the different imaging planes. In an analogous imaging field of endeavor, regarding anatomical feature tracking and ultrasound image processing, Lachaine teaches a system, wherein the unacquired imaging plane is located between the different imaging planes (Paragraphs 0012-13 allow for the interpolation of images between acquired images. The anatomical structure can be constructed with respect to the interpolation. See Fig. 2 and the interpolated plane being between the acquired planes). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with Lachaine’s teaching of the plane being between acquired image planes. This modified apparatus would allow the user to improve reconstructed images and their accuracy (Paragraph 0011 of Lachaine). Furthermore, the modification addresses the need for a reconstruction that can solve the issue of slow processing time (Paragraph 0010 of Lachaine). Claim 26 is rejected under 35 U.S.C. 103 as being unpatentable over Gerard et al. 
(PGPUB No. US 2018/0322627) in view of Voigt et al. (PGPUB No. US 2019/0099159) further in view of Lachaine et al. (PGPUB No. US 2009/0041323) further in view of Heimdal et al. (PGPUB No. US 2006/0004291). Regarding claim 26, modified Gerard teaches the system in claim 24, as discussed above. However, the combination of Gerard and Voigt is silent regarding a system, wherein the unacquired imaging plane intersects one or more of the different imaging planes. In an analogous imaging field of endeavor, regarding anatomical feature tracking and ultrasound, Heimdal teaches a system, wherein the unacquired imaging plane intersects one or more of the different imaging planes (Paragraph 0026 teaches the interpolation between the frames. Fig. 4 shows the intersecting planes). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of Gerard and Voigt with Heimdal’s teaching of an imaging plane that intersects the different imaging planes. This modified apparatus would allow the user to be able to implement surface model rendering techniques for the visualization of quantitative data (Paragraph 0005 of Heimdal). Furthermore, the modification allows for the imaging of the heart and production of a cine loop over a cardiac cycle (Paragraph 0026 of Heimdal). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Qazi et al. (PGPUB No. US 2006/0247544): Teaches the user input of a single image for a neural network to use for predictive measurements. Somphone et al. (PGPUB No. US 2019/0216439): Teaches the interpolation of data between image planes. Rai (PGPUB No. US 2017/0209125): Teaches the use of marker pairs. Petruzzelli et al. (PGPUB No. US 2013/0324850): Teaches the use of marker pairs. Chono (PGPUB No. US 2011/0313291): Teaches the interpolation of unacquired images between acquired images. Yamauchi et al. (PGPUB No. 
US 2002/0123688): Teaches the interpolation of unacquired images between acquired images. Kondo et al. (PGPUB No. US 2014/0176799): Teaches the interpolation of unacquired images between acquired images. Mansi et al. (PGPUB No. US 2012/0232386): Teaches the user input of a single image for a neural network to use for predictive measurements. Kobayashi et al. (PGPUB No. US 2017/0296149): Teaches the use of marker pairs. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADIL PARTAP S VIRK whose telephone number is (571)272-8569. The examiner can normally be reached Mon-Fri 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached on 571-272-2714. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ADIL PARTAP S VIRK/Primary Examiner, Art Unit 3798 1 https://www.bmus.org/for-patients/history-of-ultrasound/ 2 https://www.rochester.edu/rcbu/education/courses.html 3 See Medical Imaging Technology by Victor I. Mikla and Victor V. Mikla.

Prosecution Timeline

May 19, 2022
Application Filed
Aug 24, 2023
Non-Final Rejection — §101, §103, §112
Nov 29, 2023
Response Filed
Mar 11, 2024
Final Rejection — §101, §103, §112
Jun 13, 2024
Response after Non-Final Action
Jun 25, 2024
Request for Continued Examination
Jun 26, 2024
Response after Non-Final Action
Jun 28, 2024
Non-Final Rejection — §101, §103, §112
Oct 03, 2024
Response Filed
Nov 18, 2024
Final Rejection — §101, §103, §112
Jan 10, 2025
Interview Requested
Jan 16, 2025
Applicant Interview (Telephonic)
Jan 16, 2025
Examiner Interview Summary
Jan 22, 2025
Response after Non-Final Action
Feb 11, 2025
Request for Continued Examination
Feb 12, 2025
Response after Non-Final Action
Jun 10, 2025
Non-Final Rejection — §101, §103, §112
Sep 04, 2025
Examiner Interview Summary
Sep 04, 2025
Applicant Interview (Telephonic)
Sep 12, 2025
Response Filed
Feb 14, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599313
Health Trackers for Autonomous Targeting of Tissue Sampling Sites
2y 5m to grant Granted Apr 14, 2026
Patent 12569221
Systems and Methods for Infrared-Enhanced Ultrasound Visualization
2y 5m to grant Granted Mar 10, 2026
Patent 12569228
ULTRASOUND DIAGNOSTIC APPARATUS AND CONTROL METHOD FOR ULTRASOUND DIAGNOSTIC APPARATUS
2y 5m to grant Granted Mar 10, 2026
Patent 12569304
OPTICAL COHERENCE TOMOGRAPHY GUIDED ROBOTIC OPHTHALMIC PROCEDURES
2y 5m to grant Granted Mar 10, 2026
Patent 12564384
SYSTEM AND METHODS FOR JOINT SCAN PARAMETER SELECTION
2y 5m to grant Granted Mar 03, 2026
Based on the 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
48%
Grant Probability
89%
With Interview (+41.3%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 213 resolved cases by this examiner. Grant probability derived from career allow rate.