DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has complied with all conditions for receiving the benefit of an earlier filing date of 23 Dec 2021 under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c).
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 14 Jun 2024 and 26 Mar 2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the Examiner.
Specification
The disclosure is objected to because of the following informalities:
“The Methods session describes …” should read “The Methods section describes …” (pg. 4, line 21);
“all remaining eligible participants Some” should read “all remaining eligible participants. Some” (pg. 6, line 30); and
“which In one example” should read “which in one example” (pg. 26, line 3).
Appropriate correction is required.
Claim Objections
Claims 1, 5, 9-10, 13, 17, 20-22, and 24 are objected to because of the following informalities:
“estimating gestational age” should read “estimating a gestational age” (preamble of claims 1 and 13);
“the trained machine learning mode” should read “the trained machine learning model” (claim 1);
“outputting the estimate of gestational age to a user” should read “outputting the estimate of the gestational age to a user” (claims 1, 13, and 24);
“estimating gestational age” should read “estimating the gestational age” (claims 5 and 17);
“the gestational age estimate” should read “the estimate of the gestational age” for consistency (claims 9-10 and 21-22);
“the feature extraction module for receiving … and producing … providing” should read “the feature extraction module for receiving … producing … and providing” (claim 13); and
“wherein the trained machine learning model includes …” should read “wherein the trained machine learning model further includes …” (claims 20-22).
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid interpretation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claims 1, 13, and 24 recite the limitations “producing, by propagating the ultrasound image data through the feature extraction module, at least one feature vector from the ultrasound image data”, “the feature extraction module for receiving fetal ultrasound image data for at least one image of a human fetus, and producing, by propagating the ultrasound image data through the feature extraction module, at least one feature vector from the ultrasound image data, providing the at least one feature vector as input to the attention module”, and “receiving, at a feature extraction module of a trained machine learning model, fetal ultrasound image data for at least one image of a human fetus, and producing, by propagating the ultrasound image data through the feature extraction module, at least one feature vector from the ultrasound image data”, respectively. Since these limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, “feature extraction module” in claims 1, 13, and 24 has been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification discloses that the claimed functions of “feature extraction module” are performed by at least a processor and a memory for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation:
Pg. 23, line 17 – pg. 24, line 21: … “The trained machine learning model may be implemented on a computing platform including at least one processor and a memory. The trained machine learning model may, in one example, be implemented using computer executable instructions stored in the memory which cause the processor to perform steps that implement the trained machine learning module …”
Furthermore, a review of the specification discloses the following algorithm for the processor- and memory-implemented functions:
Pg. 15, line 28 – pg. 17, line 2: “Feature Extraction Module
During training, each frame is processed using ResNet-50 architecture initialized with weights trained on the ImageNet data set, this step yields a feature vector of size 2048 for each frame. The extracted features are then analyzed via our Weighted Average Attention (WAA) Module described in the following section. While a pre-trained network is used for feature extraction, the weights in ResNet-50 are fine-tuned together with the other parameters in the model during training.”
For purposes of the examination, the Examiner will interpret “feature extraction module” as a processor and memory configured to use ResNet-50 architecture initialized with weights trained on the ImageNet data set and yield a feature vector of size 2048 for each ultrasound image frame, as disclosed in pg. 15, line 28 – pg. 17, line 2 of the specification of the instant application, or equivalents thereof.
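For illustration only, the structure adopted by this interpretation may be sketched as follows. This is a non-limiting sketch assuming a PyTorch-style implementation; the specification does not mandate any particular framework, and the identifier names are illustrative.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Illustrative sketch only: a ResNet-50 backbone initialized with
    # ImageNet-trained weights, truncated before its classification head,
    # so that each ultrasound frame yields a 2048-dimensional feature
    # vector, consistent with pg. 15, line 28 - pg. 17, line 2.
    class FeatureExtractionModule(nn.Module):
        def __init__(self):
            super().__init__()
            backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
            # Keep everything up to and including global average pooling.
            self.features = nn.Sequential(*list(backbone.children())[:-1])

        def forward(self, frames):        # frames: (T, 3, H, W), T frames per sweep
            x = self.features(frames)     # (T, 2048, 1, 1)
            return torch.flatten(x, 1)    # (T, 2048), one feature vector per frame

Per the quoted disclosure, these backbone weights are fine-tuned together with the other parameters of the model during training rather than held fixed.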
Claims 1, 13, and 24 recite the limitations “producing, by propagating the feature vectors through the attention module, a weighted sum vector that aggregates and weights the feature vectors”, “the attention module for producing, by propagating the feature vectors through the attention module, a weighted sum vector that aggregates and weights the feature vectors and providing the weighted sum vector as input to the gestational age prediction module”, and “producing, by propagating the feature vectors through the attention module, a weighted sum vector that aggregates and weights the feature vectors”, respectively. Since these limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, “attention module” in claims 1, 13, and 24 has been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification discloses that the claimed functions of “attention module” are performed by at least a processor and a memory for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation:
Pg. 23, line 17 – pg. 24, line 21: … “The trained machine learning model may be implemented on a computing platform including at least one processor and a memory. The trained machine learning model may, in one example, be implemented using computer executable instructions stored in the memory which cause the processor to perform steps that implement the trained machine learning module …”
Furthermore, a review of the specification discloses the following algorithm for the processor-implemented functions:
Pg. 17, line 4 – pg. 18, line 2: “Weighted Average Attention Module
The functional form of our attention module is motivated by the additive Bahdanau attention. Our Weighted Average Attention (WAA) Module has 3 trainable parameters V, W, and Q as defined here: … (see the specification for the disclosed equations) where x.sub.t is the output features from the feature extraction module. The attention module comprises W, which is a linear dense layer that outputs a vector of size 64, followed by the hyperbolic tangent activation function and finally a dot product with V to map to a single scalar value Wt between zero and one for each frame, where the scalar value for wt is determined by the sigmoid function denoted by σ in Equation 1. Equation 2 computes a weighted score s.sub.t on the time dimension so that Σs=1.
The parameters of the weighted average attention module are jointly trained with the other parts of the model. The attention mechanism described above allows the model to focus on frames of the input sequence that contain fetal structures and maximize the gestational age prediction power. The output of this attention module is a weighted sum of the features from the input frames and computed as shown in Equation 3 where Q is another dense layer which reduces the dimension of the feature vector x.sub.t from 2048 to 128. The weighted sum computation allows arbitrary sequence length and enables our model to make predictions based on a single or multiple frames.
Finally, a single linear layer takes variable a in Equation 3 as input and outputs the gestational age estimate.”
For purposes of the examination, the Examiner will interpret “attention module” as a processor and memory configured to compute the algorithm disclosed in pg. 17, line 4 – pg. 18, line 2 of the specification of the instant application, or equivalents thereof.
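For illustration only, the quoted attention mechanism may be sketched as follows. This is a non-limiting sketch assuming a PyTorch-style implementation; the exact form of the normalization in Equation 2 is governed by the disclosed equations, which are not reproduced here.

    import torch
    import torch.nn as nn

    # Illustrative sketch only of the quoted Weighted Average Attention
    # (WAA) module: W is a dense layer to 64 dimensions followed by tanh;
    # a dot product with V and a sigmoid yield a per-frame scalar in
    # (0, 1); the scores are normalized over the time dimension so they
    # sum to one (the precise form of Equation 2 is in the specification);
    # and Q reduces each 2048-d feature to 128 dimensions before the
    # weighted sum is taken, allowing arbitrary sequence length.
    class WeightedAverageAttention(nn.Module):
        def __init__(self, feat_dim=2048, attn_dim=64, out_dim=128):
            super().__init__()
            self.W = nn.Linear(feat_dim, attn_dim)
            self.V = nn.Linear(attn_dim, 1, bias=False)
            self.Q = nn.Linear(feat_dim, out_dim)

        def forward(self, x):             # x: (T, 2048), any number of frames T
            w = torch.sigmoid(self.V(torch.tanh(self.W(x))))  # (T, 1)
            s = w / w.sum(dim=0, keepdim=True)                # scores sum to 1
            return (s * self.Q(x)).sum(dim=0)                 # weighted sum, (128,)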
Claims 1, 13, and 24 recite the limitations “providing the weighted sum vector as input to a gestational age prediction module of the trained machine learning mode, which generates, from the weighted sum vector, an estimate of the gestational age of the human fetus”, “the gestational age prediction module for generating, from the weighted sum vector, an estimate of the gestational age of the human fetus and outputting the estimate of gestational age to a user”, and “providing the weighted sum vector as input to a gestational age prediction module of the trained machine learning mode, which generates, from the weighted sum vector, an estimate of the gestational age of the human fetus”, respectively. Since these limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, “gestational age prediction module” in claims 1, 13, and 24 has been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification discloses that the claimed functions of “gestational age prediction module” are performed by at least a processor and a memory for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation:
Pg. 23, line 17 – pg. 24, line 21: … “The trained machine learning model may be implemented on a computing platform including at least one processor and a memory. The trained machine learning model may, in one example, be implemented using computer executable instructions stored in the memory which cause the processor to perform steps that implement the trained machine learning module …”
However, a review of the specification does not appear to disclose the computer-implemented algorithm for the claimed function and equivalents thereof. In particular, pg. 19, lines 18-19 of the specification merely disclose “Finally, a fully connected layer (P) estimates the gestational age.”; and pg. 25, line 33 – pg. 26, line 6 merely disclose “the gestational age production module comprises a linear prediction module weighted from the training, that takes the weighted sum vector as input, applies the weight(s), and produces an output, which In one example, indicates a gestational age in days.” These passages do not provide sufficient detail disclosing the algorithm (e.g., the necessary steps and/or flowcharts) that performs the claimed function of the “gestational age prediction module”. See MPEP 2161.01.I. Accordingly, one skilled in the art would not know the algorithm (i.e., the necessary steps and/or flowcharts) for performing the claimed function of the “gestational age prediction module”, and the metes and bounds of the claimed invention are unclear. For these reasons, applicant has failed to comply with the written description requirement as required by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, and to particularly point out and distinctly claim the invention as required by 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
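For context only, the entirety of the structure that the cited passages do disclose for this module amounts to a single linear mapping, which may be sketched as follows (an illustrative sketch; the 128-dimensional input is taken from the attention-module disclosure at pg. 17, line 4 – pg. 18, line 2):

    import torch.nn as nn

    # Illustrative sketch only: the cited passages disclose no more than a
    # single fully connected (linear) layer mapping the 128-d weighted sum
    # vector to a scalar output indicating a gestational age (e.g., in days).
    gestational_age_prediction_module = nn.Linear(128, 1)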
Claims 8 and 20 recite the limitations “producing, using the classification module, a class distribution vector for each of the feature vectors” and “a classification module for receiving the feature vectors output from the feature extraction module and producing a class distribution vector for each of the feature vectors”, respectively. Since these limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, “classification module” in claims 8 and 20 has been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification discloses that the claimed functions of “classification module” are performed by at least a processor and a memory for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation:
Pg. 23, line 17 – pg. 24, line 21: … “The trained machine learning model may be implemented on a computing platform including at least one processor and a memory. The trained machine learning model may, in one example, be implemented using computer executable instructions stored in the memory which cause the processor to perform steps that implement the trained machine learning module …”
Furthermore, a review of the specification discloses the following algorithm for the processor-implemented functions:
Pg. 27, line 9 – pg. 29, line 29: “Classification Module
Image Labeling Via Contrastive Learning and kMeans
Our unsupervised clustering approach is inspired by SimCLR.sup.1. We generate image embeddings of dimension 128 and use kMeans.sup.2 to automatically find image clusters.
FIG. 10 shows the steps to create two embeddings of the same image under different transformations. We use the trained gestational age model (FIG. 4) to compute the image attention score. This scalar between [0, 1] is correlated to image quality for the computation of gestational age, however, the image content is unknown. In FIG. 10, a transformation T is applied twice to an input US frame (in this example, a fetal head). The transformed frames are projected to 128-dimension embeddings (Z.sub.0, Z.sub.1). The trained Attention module from the Gestational Age Prediction Model (FIG. 4) is used to produce an attention score for each frame. The embeddings and scores are used to compute the contrastive loss. (See the specification for the disclosed equations.)
FIG. 11 is a diagram illustrating a 128-dimension unit hypersphere. The embeddings Z.sub.0 and Z.sub.1 correspond to the same image under different transformations. While Z.sub.0r corresponds to a different image in the same batch. The loss function is composed of 3 terms: similarity, contrastive, and north. The similarity term uses the cosine similarity and aims to minimize the angle between embeddings of the same image under 2 different transformations. The cosine similarity is equal to 1 if the vectors are colinear, 0 if the vectors are orthogonal, and −1 if they point in opposite directions. The image transformations include random intensity (color jitter) and random rotation or random resize crop.
The contrastive term starts by randomizing z.sub.0.fwdarw.z.sub.0r images in the batch (the batch size during training is 256) and computes the cosine similarity against z.sub.1. The resulting values are then sorted in ascending order, i.e., the most orthogonal/different will be first. Finally, we scale the resulting values using W(1−x).sup.2 where W is a hyperparameter set to 16. The loss function encourages different images to be far apart in the embedding space while similar images to be closer.
Finally, the north term uses the score scalar s given by the GA model to further organize the embedding space. It pushes images that have important fetal anatomy to be closer to the equator of the hypersphere while images without meaningful information to be closer to the north pole, i.e., the vector of zeros with 1 in the final dimension.
The data split is the same as the splits used for training the gestational age prediction model.
The contrastive model is trained for 177 epochs, we use the AdamW optimizer with learning rate 1e.sup.−4 and use the early stopping criteria with patience 30. After the training is done, we use kMeans to automatically find images clusters. The image clusters are then analyzed by expert sonographers and labels are assigned to each cluster. The unsupervised clustering method identifies fetal structures of different quality notably head images with varying quality.
Classification
The labels produced by the unsupervised method described above are used to train a classifier. We use 35 classes and the cross-entropy loss for a multiclass classification problem. The data used for the classification task uses the test split only. The test split is further subdivided following a 0.7, 0.1, and 0.2 splits for training, validation, and testing. The fetal structures include head, abdomen, femur, placenta, gestational sac, low/high quality head images. FIG. 12 shows the normalized confusion matrix and Table 7 the classification report with an average f1-score of 0.8 for the classification task.
With this image classification model, we provide the possibility to automatically assign labels to the blind sweep videos used by the gestational age prediction model. This module provides the possibility to display clinically relevant frames to the experts, while hiding frames that are not meaningful for an expert.”
For purposes of the examination, the Examiner will interpret “classification module” as a processor and memory configured to compute the algorithm disclosed in pg. 27, line 9 – pg. 29, line 29 of the specification of the instant application, or equivalents thereof.
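For illustration only, the quoted contrastive training objective may be sketched as follows. This is a non-limiting structural sketch assuming a PyTorch-style implementation; the precise composition of the similarity, contrastive, and north terms is governed by the equations in the specification, which are not reproduced in this record, so the combination below is an assumption.

    import torch
    import torch.nn.functional as F

    # Illustrative structural sketch only of the quoted three-term loss.
    # z0, z1: (B, 128) embeddings of the same images under two different
    # transformations; scores: (B,) attention scores from the GA model.
    def contrastive_loss(z0, z1, scores, W=16.0):
        z0 = F.normalize(z0, dim=1)    # embeddings lie on the unit hypersphere
        z1 = F.normalize(z1, dim=1)

        # Similarity term: minimize the angle between paired embeddings.
        similarity = (1.0 - F.cosine_similarity(z0, z1, dim=1)).mean()

        # Contrastive term: shuffle z0 within the batch (z0 -> z0r), compute
        # cosine similarity against z1, sort ascending, and scale with
        # W * (1 - x)^2 as quoted; how these scaled values enter the loss
        # is fixed by the specification's equations (assumed here).
        z0r = z0[torch.randperm(z0.size(0))]
        cos = F.cosine_similarity(z0r, z1, dim=1).sort().values
        contrastive = (W * (1.0 - cos) ** 2).mean()

        # North term: push low-score frames toward the north pole (zeros
        # with 1 in the final dimension) and high-score frames toward the
        # equator of the hypersphere (weighting assumed).
        north = torch.zeros_like(z1)
        north[:, -1] = 1.0
        cos_north = F.cosine_similarity(z1, north, dim=1)
        north_term = (scores * cos_north.abs()
                      + (1.0 - scores) * (1.0 - cos_north)).mean()

        return similarity + contrastive + north_term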
Claims 21-22 recite the limitations “an error prediction module for using the class distribution vector and the attention score for each feature vector to output a score indicative of a quality of the gestational age estimate” and “an error prediction module for using the feature vector to generate an estimate of uncertainty for the gestational age estimate”, respectively. Since these limitations invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, “error prediction module” in claims 21-22 has been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification discloses that the claimed functions of “error prediction module” are performed by at least a processor and a memory for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation:
Pg. 23, line 17 – pg. 24, line 21: … “The trained machine learning model may be implemented on a computing platform including at least one processor and a memory. The trained machine learning model may, in one example, be implemented using computer executable instructions stored in the memory which cause the processor to perform steps that implement the trained machine learning module …”
Furthermore, a review of the specification discloses the following algorithm for the processor-implemented functions:
Pg. 30, line 3 – pg. 31, line 18: “Error Prediction Module
Estimate Uncertainty Quantification
This module functions both as a quality control mechanism for the blind sweeps collected by the user and a user feedback mechanism. It takes as inputs the attention scores from the attention layer (GA model), the class distribution vector from the classification module, feature vectors from GA model and/or unsupervised clustering model, as well as the gestational age estimate (GA model). The module combines these inputs to assess the quality of information gathered from a series of ultrasound images.
This module performs two quality assessments tasks. First, it uses the attention scores and a thresholding operation to determine that a certain number of frames in the blind sweeps meet a minimum quality criterion to calculate the gestational age. Second, using the class distribution vector, it evaluates the quality (for gestational age prediction) of information collected in a series of ultrasound images. The module uses this quality assessment to construct an individualized prediction interval for the gestational age estimate produced by gestational age prediction module. The output of this module may include, for example, 1. a confidence score; 2. a prediction interval for the gestational age estimate; 3. an accept or reject decision by the module. Based on these outputs, the user may be prompted to repeat the blind sweeps collection procedure.”
For purposes of the examination, the Examiner will interpret “error prediction module” as a processor and memory configured to compute the algorithm disclosed in pg. 30, line 3 – pg. 31, line 18 of the specification of the instant application, or equivalents thereof.
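For illustration only, the two quoted quality-assessment tasks may be sketched as follows. This is a non-limiting sketch; the threshold values, the minimum frame count, and the prediction-interval construction are assumptions, as the quoted passage does not fix them.

    import numpy as np

    # Illustrative sketch only of the two quality assessment tasks quoted
    # above; all numeric parameters below are assumed for illustration.
    def assess_sweep_quality(attn_scores, class_dists, ga_estimate_days,
                             score_thresh=0.5, min_frames=10,
                             base_margin_days=7.0):
        attn_scores = np.asarray(attn_scores, dtype=float)

        # Task 1: threshold the attention scores and require that enough
        # frames meet the minimum quality criterion.
        n_good = int((attn_scores > score_thresh).sum())
        accept = n_good >= min_frames

        # Task 2: grade the information content of the sweep from the class
        # distribution vectors (here: mean maximum class probability).
        quality = float(np.mean([np.max(d) for d in class_dists]))

        # Combine into a confidence score and an individualized prediction
        # interval around the gestational age estimate.
        confidence = quality * min(1.0, n_good / min_frames)
        margin = base_margin_days / max(confidence, 1e-6)
        return {"confidence": confidence,
                "prediction_interval_days": (ga_estimate_days - margin,
                                             ga_estimate_days + margin),
                "accept": accept}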
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-11 and 13-24 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites the limitation “providing the weighted sum vector as input to a gestational age prediction module of the trained machine learning mode, which generates, from the weighted sum vector, an estimate of the gestational age of the human fetus”. As noted above in the Claim Interpretation, a review of the specification of the instant application does not disclose an algorithm for this computer-implemented limitation. In particular, pg. 19, lines 18-19 of the specification merely disclose “Finally, a fully connected layer (P) estimates the gestational age.”; and pg. 25, line 33 – pg. 26, line 6 merely disclose “the gestational age production module comprises a linear prediction module weighted from the training, that takes the weighted sum vector as input, applies the weight(s), and produces an output, which In one example, indicates a gestational age in days.” These passages do not provide sufficient detail disclosing an algorithm (e.g., the necessary steps and/or flowcharts) that performs the claimed function of the “gestational age prediction module”. See MPEP 2161.01.I. Accordingly, one skilled in the art would not know the algorithm (i.e., the necessary steps and/or flowcharts) for performing the claimed function of the “gestational age prediction module” in view of the specification of the instant application. Claims 2-11 inherit the deficiency by the nature of their dependency on claim 1.
Claim 13 recites the limitation “the gestational age prediction module for generating, from the weighted sum vector, an estimate of the gestational age of the human fetus and outputting the estimate of gestational age to a user”. As noted above in the Claim Interpretation, a review of the specification of the instant application does not disclose an algorithm for this computer-implemented limitation. In particular, pg. 19, lines 18-19 of the specification merely disclose “Finally, a fully connected layer (P) estimates the gestational age.”; and pg. 25, line 33 – pg. 26, line 6 merely disclose “the gestational age production module comprises a linear prediction module weighted from the training, that takes the weighted sum vector as input, applies the weight(s), and produces an output, which In one example, indicates a gestational age in days.” These passages do not provide sufficient detail disclosing an algorithm (e.g., the necessary steps and/or flowcharts) that performs the claimed function of the “gestational age prediction module”. See MPEP 2161.01.I. Accordingly, one skilled in the art would not know the algorithm (i.e., the necessary steps and/or flowcharts) for performing the claimed function of the “gestational age prediction module” in view of the specification of the instant application. Claims 14-23 inherit the deficiency by the nature of their dependency on claim 13.
Claim 24 recites the limitation “providing the weighted sum vector as input to a gestational age prediction module of the trained machine learning mode, which generates, from the weighted sum vector, an estimate of the gestational age of the human fetus”. As noted above in the Claim Interpretation, a review of the specification of the instant application does not disclose an algorithm for this computer-implemented limitation. In particular, pg. 19, lines 18-19 of the specification merely disclose “Finally, a fully connected layer (P) estimates the gestational age.”; and pg. 25, line 33 – pg. 26, line 6 merely disclose “the gestational age production module comprises a linear prediction module weighted from the training, that takes the weighted sum vector as input, applies the weight(s), and produces an output, which In one example, indicates a gestational age in days.” These passages do not provide sufficient detail disclosing an algorithm (e.g., the necessary steps and/or flowcharts) that performs the claimed function of the “gestational age prediction module”. See MPEP 2161.01.I. Accordingly, one skilled in the art would not know the algorithm (i.e., the necessary steps and/or flowcharts) for performing the claimed function of the “gestational age prediction module” in view of the specification of the instant application.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-11 and 13-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claim 1 recites the limitation “receiving, at a feature extraction module of a trained machine learning model, fetal ultrasound image data for at least one image of a human fetus”. It is unclear whether “a trained machine learning model” and “a human fetus” recited in the limitation are the same as or different from “a trained machine learning model” and “a human fetus” recited in the preamble of the claim, respectively. Claims 2-11 inherit the deficiency by the nature of their dependency on claim 1. For purposes of the examination, the limitation is being given a broadest reasonable interpretation as “receiving, at a feature extraction module of the trained machine learning model, fetal ultrasound image data for at least one image of the human fetus”.
Claim 1 recites the limitation “providing the at least one feature vector as input to an attention module of the trained machine learning model”. It is unclear whether “an attention module” recited in the limitation is the same as or different from “an attention function” recited in the preamble of the claim. Claims 2-11 inherit the deficiency by the nature of their dependency on claim 1. For purposes of the examination, “an attention function” recited in the preamble is being given a broadest reasonable interpretation as being the same as “an attention module” recited in the limitation.
Claim 1 recites the limitations “producing, by propagating the ultrasound image data through the feature extraction module, at least one feature vector from the ultrasound image data” and “providing the at least one feature vector as input to an attention module of the trained machine learning model and producing, by propagating the feature vectors through the attention module, a weighted sum vector that aggregates and weights the feature vectors”. The limitations recite producing and providing “at least one feature vector” to the attention module, yet the limitations also recite propagating “the feature vectors” through the attention module and producing a weighted sum vector by aggregating and weighting “the feature vectors”. Thus, it is unclear whether more than one feature vector must be produced and provided to the attention module. Claims 2-11 inherit the deficiency by the nature of their dependency on claim 1. For purposes of the examination, a broadest reasonable interpretation is being given to “at least one feature vector” that is produced and provided to the attention module as “at least two feature vectors”, and any further recitation of “at least one feature vector” in the dependent claims is likewise being interpreted as “at least two feature vectors”. Additionally, “the feature vectors” recited in claim 1 and its dependent claims is being given a broadest reasonable interpretation as “the at least two feature vectors” for clarity.
Claim 1 recites the limitation “providing the weighted sum vector as input to a gestational age prediction module of the trained machine learning mode, which generates, from the weighted sum vector, an estimate of the gestational age of the human fetus”. As noted above in the Claim Interpretation and the 35 U.S.C. 112(a) rejection, a review of the specification of the instant application does not disclose an algorithm for this computer-implemented limitation. Therefore, the metes and bounds of the claim are unclear in view of this computer-implemented limitation. Claims 2-11 inherit the deficiency by the nature of their dependency on claim 1.
Claim 5 recites the limitation “wherein the weighted average attention module weights the at least one feature vector based on relative importance of the features in the at least one feature vector in estimating gestational age”. First, the term “relative importance” is a relative term which renders the claim indefinite. The term “relative importance”, or even “importance” alone, is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Second, the antecedent basis for “the features” in the limitation is unclear. For purposes of the examination, the limitation is being given a broadest reasonable interpretation as “wherein the weighted average attention module weights the at least two feature vectors based on features in the at least two feature vectors in estimating the gestational age”.
Claim 10 recites the limitation “using the feature vector to generate an estimate of uncertainty for the gestational age estimate”. The antecedent basis for “the feature vector” is unclear. It is unclear whether “the feature vector” is referring to: a) one of “the feature vectors” recited in claim 1, upon which claim 10 depends; b) a specific one of “the feature vectors” recited in claim 1; c) all of “the feature vectors”; or d) otherwise. For purposes of the examination, the limitation is being given a broadest reasonable interpretation as “using the at least two feature vectors to generate an estimate of uncertainty for the gestational age estimate”.
Claim 11 recites the limitation “selecting, using the class distribution vectors, clinically relevant images from the image frames”. The term “clinically relevant” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For purposes of the examination, the limitation is being given a broadest reasonable interpretation as “selecting, using the class distribution vectors, representative image frames from the image frames”.
Claim 13 recites the limitation “a trained machine learning module implemented using the at least one processor, the trained machine learning module including a feature extraction module, an attention module, and a gestational age prediction module”. First, it is unclear whether “a trained machine learning module” recited in the limitation is the same as or different from “a trained machine learning model” recited in the preamble of the claim. Second, it is unclear whether “an attention module” recited in the limitation is the same as or different from “an attention function” recited in the preamble of the claim. Claims 14-23 inherit the deficiency by the nature of their dependency on claim 13. For purposes of the examination, the limitation is being given a broadest reasonable interpretation as “the trained machine learning module implemented using the at least one processor, the trained machine learning module including a feature extraction module, an attention module, and a gestational age prediction module”. Additionally, “an attention function” recited in the preamble is being given a broadest reasonable interpretation as being the same as “an attention module” recited in the limitation.
Claim 13 recites the limitations “producing, by propagating the ultrasound image data through the feature extraction module, at least one feature vector from the ultrasound image data, providing the at least one feature vector as input to the attention module” and “the attention module for producing, by propagating the feature vectors through the attention module, a weighted sum vector that aggregates and weights the feature vectors”. The limitations recite producing and providing “at least one feature vector” to the attention module, yet the limitations also recite propagating “the feature vectors” through the attention module and producing a weighted sum vector by aggregating and weighting “the feature vectors”. Thus, it is unclear whether more than one feature vector must be produced and provided to the attention module. Claims 14-23 inherit the deficiency by the nature of their dependency on claim 13. For purposes of the examination, a broadest reasonable interpretation is being given to “at least one feature vector” that is produced and provided to the attention module as “at least two feature vectors”, and any further recitation of “at least one feature vector” in the dependent claims is likewise being interpreted as “at least two feature vectors”. Additionally, “the feature vectors” recited in claim 13 and its dependent claims is being given a broadest reasonable interpretation as “the at least two feature vectors” for clarity.
Claim 13 recites the limitation “the gestational age prediction module for generating, from the weighted sum vector, an estimate of the gestational age of the human fetus and outputting the estimate of gestational age to a user”. As noted above in the Claim Interpretation and the 35 U.S.C. 112(a) rejection, a review of the specification of the instant application does not disclose an algorithm for this computer-implemented limitation. Therefore, the metes and bounds of the claim are unclear in view of this computer-implemented limitation. Claims 14-23 inherit the deficiency by the nature of their dependency on claim 13.
Claim 17 recites the limitation “wherein the weighted average attention module weights the at least one feature vector based on relative importance of the features in the at least one feature vector in estimating gestational age”. First, the term “relative importance” is a relative term which renders the claim indefinite. The term “relative importance”, or even “importance” alone, is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Second, the antecedent basis for “the features” in the limitation is unclear. For purposes of the examination, the limitation is being given a broadest reasonable interpretation as “wherein the weighted average attention module weights the at least two feature vectors based on features in the at least two feature vectors in estimating the gestational age”.
Claim 22 recites the limitation “wherein the trained machine learning model includes an error prediction module for using the feature vector to generate an estimate of uncertainty for the gestational age estimate”. The antecedent basis for “the feature vector” is unclear. It is unclear whether “the feature vector” is referring to: a) one of “the feature vectors” recited in claim 13, upon which claim 22 depends; b) a specific one of “the feature vectors” recited in claim 13; c) all of “the feature vectors”; or d) otherwise. For purposes of the examination, the limitation is being given a broadest reasonable interpretation as “wherein the trained machine learning model includes an error prediction module for using the at least two feature vectors to generate an estimate of uncertainty for the gestational age estimate”.
Claim 23 recites the limitation “wherein the classification module is configured to select, using the class distribution vectors, clinically relevant images from the image frames”. The term “clinically relevant” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For purposes of the examination, the limitation is being given a broadest reasonable interpretation as “wherein the classification module is configured to select, using the class distribution vectors, representative images from the image frames”.
Claim 24 recites the limitations “producing, by propagating the ultrasound image data through the feature extraction module, at least one feature vector from the ultrasound image data” and “providing the at least one feature vector as input to an attention module of the trained machine learning model and producing, by propagating the feature vectors through the attention module, a weighted sum vector that aggregates and weights the feature vectors”. The limitations recite producing and providing “at least one feature vector” to the attention module, yet the limitations also recite propagating “the feature vectors” through the attention module and producing a weighted sum vector by aggregating and weighting “the feature vectors”. Thus, it is unclear whether more than one feature vector must be produced and provided to the attention module. For purposes of the examination, a broadest reasonable interpretation is being given to “at least one feature vector” that is produced and provided to the attention module as “at least two feature vectors”. Additionally, “the feature vectors” recited in claim 24 is being given a broadest reasonable interpretation as “the at least two feature vectors” for clarity.
Claim 24 recites the limitation “providing the weighted sum vector as input to a gestational age prediction module of the trained machine learning mode, which generates, from the weighted sum vector, an estimate of the gestational age of the human fetus”. As noted above in the Claim Interpretation and the 35 U.S.C. 112(a) rejection, a review of the specification of the instant application does not disclose an algorithm for this computer-implemented limitation. Therefore, the metes and bounds of the claim are unclear in view of this computer-implemented limitation.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter, because claims 1-11 recite at least “A method for estimating gestational age of a human fetus using a trained machine learning model with an attention function”. By contrast, claims 13-23 recite at least “a trained machine learning module implemented using the at least one processor, the trained machine learning module including a feature extraction module, an attention module, and a gestational age prediction module” and are not rejected on this ground.
Claim 1 recites “A method for estimating gestational age of a human fetus using a trained machine learning model with an attention function, the method comprising: …” While the claim is directed to a “method”, the method is recited to be performed “using a trained machine learning model”. Further, every functional limitation in the claim is recited to be performed by a module of the trained machine learning model: even the step of “outputting the estimate of gestational age to a user” is implied to be performed by the trained machine learning model, since the estimate is explicitly recited to be generated by the trained machine learning model. Therefore, claim 1 is directed to “software per se”, which is non-statutory subject matter. See MPEP 2106.03.I.: “software expressed as code or a set of instructions detached from any medium is an idea without physical embodiment. See Microsoft Corp. v. AT&T Corp., 550 U.S. 437, 449, 82 USPQ2d 1400, 1407 (2007); see also Benson, 409 U.S. at 67, 175 USPQ at 675 (An "idea" is not patent eligible)”. Claims 2-11 inherit the deficiency by the nature of their dependency on claim 1, and none of the dependent claims recites any physical medium in which the model is stored (i.e., a non-transitory storage medium) or any hardware by which it is performed (i.e., a processor).
Allowable Subject Matter
Claims 1-24 would be allowable if rewritten or amended to overcome the rejections under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, and 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action, including being consistent with the Claim Interpretations noted above under 35 U.S.C. 112(f).
The following is a statement of reasons for the indication of allowable subject matter:
When the claims are considered as a whole, including the Claim Interpretation under 35 U.S.C. 112(f) noted above, the prior art does not disclose at least receiving, at a feature extraction module of a trained machine learning model, fetal ultrasound image data for at least one image of a human fetus, and producing, by propagating the ultrasound image data through the feature extraction module, at least one feature vector from the ultrasound image data; providing the at least one feature vector as input to an attention module of the trained machine learning model and producing, by propagating the feature vectors through the attention module, a weighted sum vector that aggregates and weights the feature vectors; providing the weighted sum vector as input to a gestational age prediction module of the trained machine learning model, which generates, from the weighted sum vector, an estimate of the gestational age of the human fetus; and outputting the estimate of gestational age to a user by the algorithms noted above in the Claim Interpretation. In particular, Li et al. (Li et al., Comparison of Different Machine Learning Approaches to Predict Small for Gestational Age Infants, IEEE Transactions on Big Data, vol. 6, no. 2, pp. 334-346, 1 June 2020, doi: 10.1109/TBDATA.2016.2620981), a prior art reference being made of record herein, discloses estimating a gestational age using a trained machine learning model with image classification (see 2. Machine Learning Classification Techniques and 3. Methods). However, Li et al. does not disclose estimating the gestational age by at least producing a feature vector and a weighted sum vector using the algorithms noted above in the Claim Interpretation. Additionally, Papageorghiou et al. (Papageorghiou et al., Ultrasound-based gestational-age estimation in late pregnancy, Ultrasound Obstet Gynecol 2016; 48: 719-726, DOI: 10.1002/uog.15894) discloses estimating a gestational age using a trained machine learning model with ultrasound image-based anatomical measurements (see SUBJECTS AND METHODS); Fung et al. (Fung et al., Achieving accurate estimates of fetal gestational age and personalised predictions of fetal growth based on data from an international prospective cohort study: a population-based machine learning study, The Lancet Digital Health (2020), 2, e368-e375, DOI: 10.1016/S2589-7500(20)30131-X) discloses estimating a gestational age using a trained machine learning model with ultrasound image-based anatomical features (see Methods and Appendix); Gomes et al. (US PG Pub No. 2022/0354466, with a priority date of 27 Sep 2019) discloses estimating a gestational age using a trained machine learning model with ultrasound image-based head, abdominal, and femur length measurements (see Fig. 9 and [0069]-[0070]); Raynaud et al. (US PG Pub No. 2025/0032088, with a priority date of 09 Dec 2021) discloses estimating a gestational age using a trained machine learning model with ultrasound image-based anatomical measurements (see Fig. 1-3 and [0095]-[0099]); and Sleep et al. (US PG Pub No. 2024/0032890, not prior art but a pertinent art) discloses estimating a gestational age using a machine learning model trained with ultrasound images (see at least Fig. 3-4 and [0084]-[0087]). However, Papageorghiou et al., Fung et al., Gomes et al., Raynaud et al., and Sleep et al. disclose estimating a gestational age by quantifying anatomical feature measurements, not by at least producing a feature vector and a weighted sum vector using the algorithms noted above in the Claim Interpretation.
The claimed invention provides the technical advantage of “a machine learning algorithm that can receive as input ultrasound image data, including ultrasound image data collected by a non-expert, and that can accurately estimate gestational age of a human subject” (pg. 2, lines 11-15 of the specification of the instant application).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Younhee Choi whose telephone number is (571)272-7013. The examiner can normally be reached M-F 9AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anhtuan Nguyen, can be reached at 571-272-4963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Y.C./Examiner, Art Unit 3797
/ANH TUAN T NGUYEN/Supervisory Patent Examiner, Art Unit 3795
04/06/2026