DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments and amendments in the Amendment filed November 24, 2025 (herein “Amendment”), with respect to the objections to claims 1, 6 and 14, have been fully considered and are persuasive. The objections to claims 1, 6 and 14 have been withdrawn.
Applicant's arguments filed in the Amendment regarding the rejection of claims 1–25 under 35 U.S.C. 101 as being directed towards an abstract idea without a practical application or significantly more have been fully considered but they are not persuasive.
First, Applicant sets forth on pages 13–14 (regarding practical application) and again on page 15 (regarding significantly more) of the Amendment that the technical improvement serving as a practical application or significantly more is that the trained difference model takes only an input image set, with only one kind of image, to predict difference parameters. However, the claim does not recite “only,” and in any event, the ability of the trained difference model to predict difference parameters is the result of the details of the machine learning itself that accomplishes the difference parameter predictions, not simply the fact that only an image set need be input. Applicant concludes on page 15 that the model is “unconventional” and “provides advantageous technical effects” but fails to set forth the technical steps, operations, or structure that makes the broadly claimed “first machine learning model” unconventional.
Applicant next argues against the finding that the first training image generation steps are no more than a mental process, asserting that “what effects the characteristics of a trained model is the properties of the training data, and the way they are used during training, but not the steps of preparing the training data.” However, the claim limitations at issue are the steps of preparing (generating) the training data. Moreover, as noted in the Office Action issued June 23, 2025, and maintained herein, regardless of how the trained model is affected by the input training data, the steps of generating the training data are practically performed by the human mind, and therefore are abstract ideas. As provided by MPEP 2106.05(a): “It is important to note, the judicial exception alone cannot provide the improvement.”
Applicant further argues on pages 14–15 that the additional element of “analysis software” is a practical application. However, this limitation is recited at such a high level of generality (that is, with no technical detail on the analysis steps performed by the analysis software) that it amounts to nothing more than an instruction to apply the abstract idea using a generic computer, and thus does not render the abstract idea of “generating … the software-tracked contour” eligible. See MPEP 2106.05(f).
On pages 16–17, Applicant remarks specifically regarding claim 6 that the claim is “directed to technical improvement of quality control for software-generated results” and not towards an abstract idea. However, as set forth in MPEP 2106.04(d)(1), in order for a claim to recite a practical application by realizing a disclosed technical improvement, the claim must include the components or steps of the invention that provide the improvement described in the specification. Simply being generally “directed to” a technical improvement does not meet this requirement for finding a practical application. Further, Applicant repeats the argument that the steps for generating training data are not mental processes; as already explained above, the claimed training data generation steps are practically performed in the human mind and therefore are abstract ideas as mental processes. Applicant then again on page 16 repeats the argument that the “analysis software” is a practical application. See the response provided above regarding the high level of generality of “analysis software,” which precludes this limitation from being a practical application. Applicant concludes on page 17 that the model “provides advantageous technical effects” but fails to set forth the technical steps, operations, or structure that makes the broadly claimed “evaluation model” realize any “advantageous technical effects” disclosed in the Specification.
On pages 17–18 of the Amendment, Applicant next argues, regarding claims 18 and 25, that “generating at least one set of geometric parameters from the at least one corresponding software-analyzed image” cannot be practically performed in the human mind without the information of the software-analyzed image. However, simply because software is analyzing an image does not mean the analysis would be unintelligible to a human. The claims are recited with such breadth that any analysis, so long as it is performed by software, would constitute a “software-analyzed image.” To be sure, software often performs the same kinds of analysis that humans easily perform in their minds: facial recognition, for example. Therefore, to the extent the “information” of the software-analyzed image is not defined in the claim, the breadth permits a finding that its analysis is capable of being practically performed by the human mind. Applicant argues that a human cannot “learn the sophisticated correlations between the parameters in the model,” and yet the same “sophisticated correlations” are not presently claimed. Applicant also argues that “human evaluation” by pen and paper is time-consuming; however, the type of analysis being performed is not distinguished in the claims by way of specific limitations that realize any “machine based” efficiencies such as Applicant suggests.
Applicant next argues on pages 18–19 against the finding that “receiving at least one input image and at least one corresponding software-analyzed image” is insignificant extra-solution activity; however, Applicant does not argue why this step is not extra-solutional, much less why it is not insignificant. The examiner notes that the invention is not directed towards a novel way of receiving data for a machine learning model. Rather, the receiving step is extra-solutional in that it is a step that must be performed before the invention-based steps can take place. Hence, it is insignificant extra-solution activity. See MPEP 2106.05(g).
Lastly, Applicant argues that the elements “analysis software,” “difference model,” and “evaluation model” are sufficient to recite a practical application. However, as noted previously, these limitations are recited at such a high level of generality (that is, with no technical detail on the analysis steps performed by the analysis software, the difference model, or the evaluation model) that they amount to nothing more than an instruction to apply the abstract idea using a generic computer, and thus do not render the abstract ideas recited elsewhere in the claim eligible. See MPEP 2106.05(f).
Accordingly, in view of the above, while all of Applicant’s arguments regarding the subject matter eligibility of the claims have been fully considered, they are not found persuasive and the rejection of the claims under 35 U.S.C. 101 is maintained.
Applicant's arguments filed in the Amendment regarding the rejection of claims 1–14 and 17–25 under 35 U.S.C. 103 have been fully considered but they are not persuasive.
Regarding claim 1, Applicant first argues on pages 20–21 against the combination of Sirjani and Cheng, concluding that the combination would be “impossible,” and purports to support this finding of “impossibility” by distinguishing aspects of Cheng from the present application. However, Applicant’s remarks fail to address the actual combination of record, that is, modifying Sirjani with Cheng’s receiving of difference value sets to train on. Further, how any unrelied-upon or uncited portions of Cheng distinguish over the present application is irrelevant to the rationale for combining the cited references. To the extent Applicant wishes to distinguish the relied-upon teachings of Cheng from the present invention, what controls the obviousness analysis is what is actually claimed, not any “approaches” that may be described in the Specification but are not actually claimed.
Applicant next argues generally regarding claim 6 on page 21 that the cited reference Sirjani does not teach the claimed invention, without providing any specific citations to claim language.
On page 22, Applicant argues that Sirjani does not teach the preamble’s intended-result language of “A method of training an evaluation model to generate predicted evaluation errors related to the differences between software-generated analysis result and adjusted analysis result, comprising:” because Sirjani’s EchoRCNN is not trained “by combining outputs of both the reference stream with the user annotated/adjusted images, and the main stream with the current image of a video sequence with a previously predicted mask.” However, the preamble language does not detail how the evaluation model is trained, and in any event merely requires that the evaluation model has an intended result of generating predicted evaluation errors related to the differences between a software-generated analysis result and an adjusted analysis result. Sirjani was cited as teaching, and does teach, EchoRCNN’s regression network, which predicts offsets (generates predicted evaluation errors) related to a software-based analysis of an image versus a ground truth from a manually refined border (adjusted). Therefore, to the extent the preamble’s intended-use limitation has patentable weight, Sirjani does teach the claimed “to generate predicted evaluation errors related to the differences between software-generated analysis result and adjusted analysis result.”
Applicant further argues, “Besides, the three features cited by the Examiner do not correlate with each other in an input-output manner.” However, Applicant fails to point out the specific claim language requiring that the limitations correlate with each other, and in what order, in any particular “input-output” manner.
Applicant argues on page 23 that claim 18 is nonobvious in view of their arguments earlier presented regarding claims 1 and 6, which claim 18 also recites. Therefore, Applicant is directed to the response set forth above regarding claims 1 and 6.
Lastly, Applicant argues on page 23 that claim 25 is nonobvious because it includes the method of claim 16, which is nonobvious over Sirjani. The Examiner understands that the Applicant intended to state that claim 25 includes the method of claim 6, not 16. Therefore, Applicant is directed to the response set forth above regarding claim 6.
Accordingly, while all of Applicant’s arguments and amendments have been fully considered, they are not persuasive, and the rejection of claims 1–14 and 17–25 under 35 U.S.C. 103 is herein maintained.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1–25 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without a practical application or significantly more.
Regarding claim 1, this claim recites the following limitations which are found to be abstract ideas not reciting a practical application or significantly more:
A method of training a difference model to generate difference parameters related to the differences between a software-tracked contour and an adjusted contour, comprising: training a first machine learning model with multiple first training data sets, each of the multiple first training data sets comprises a first training image set as the input for training, and a difference parameter set as the target for training (abstract idea as mathematical concepts per MPEP §2106.04(a)(2)(I), because training a machine learning model is a mathematical concept),
wherein the first training image set and the difference parameter set are generated by the steps of: (a) obtaining the first training image set by selecting at least one image (abstract idea as a mental process as a human mind is capable of selecting images to obtain them per MPEP §2106.04(a)(2)(III));
(b) generating, by an analysis software, the software-tracked contour based on the first training image set (abstract idea as a mental process as a human mind is capable of determining a contour from images or video per MPEP §2106.04(a)(2)(III));
(c) obtaining the adjusted contour (abstract idea as a mental process as a human mind is capable of adjusting a contour using pen and paper per MPEP §2106.04(a)(2)(III)); and
(d) obtaining the difference parameter set based on the software-tracked contour and the adjusted contour (abstract idea as a mental process as a human mind is capable of determining differences in parameter values of different contours using pen and paper per MPEP §2106.04(a)(2)(III)).
Claim 1 further recites the additional elements “first machine learning model” and “analysis software”; however, these additional elements are not sufficient to recite a practical application of the abstract ideas recited, as they are mere generic computer elements and thus amount to no more than a recitation of the words "apply it" (or an equivalent), or are no more than mere instructions to implement an abstract idea or other exception on a computer. See MPEP §2106.05(f).
Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because when considered separately and in combination, the above recited additional elements do not add significantly more (also known as an “inventive concept”) to the exception. Rather, the additional elements disclosed above perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d).
Regarding claim 2, each image in the first training image set being an echocardiographic image is simply an extension of the abstract ideas recited in claim 1, without being a practical application or significantly more.
Regarding claim 3, the limitations in this claim are simply an extension of the abstract ideas recited in claim 1, without being a practical application or significantly more.
Regarding claims 4–5, the claimed specific model types further limiting the claimed first machine learning model are not sufficient to recite a practical application of the abstract ideas recited, as they are mere generic computer elements and thus amount to no more than a recitation of the words "apply it" (or an equivalent), or are no more than mere instructions to implement an abstract idea or other exception on a computer; they also are not significantly more because they perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP §§ 2106.05(d) and (f).
Regarding claim 6, this claim recites the following limitations which are found to be abstract ideas not reciting a practical application or significantly more:
A method of training an evaluation model to generate predicted evaluation errors related to the differences between software-generated analysis result and adjusted analysis result, comprising: training a second machine learning model with multiple second training data sets, each of the multiple second training data sets comprises at least one difference parameter set as inputs for training, at least one geometric parameter set as inputs for training, and an evaluation result as target for training (abstract idea as mathematical concepts per MPEP §2106.04(a)(2)(I), because training a machine learning model is a mathematical concept),
wherein: each of the at least one difference parameter set indicates the differences between a software-tracked contour and an adjusted contour (continuation of the abstract idea of training since this limitation only further limits the data involved in the training, and does not provide additional method limitations);
each of the at least one geometric parameter set is calculated based on the software-tracked contour generated by an analysis software (abstract idea as mathematical concepts per MPEP §2106.04(a)(2)(I)); and
the evaluation result is determined based on the differences between a software-generated analysis result and an adjusted analysis result (abstract idea as a mental process as a human mind is capable of evaluating differences in results at least by using pen and paper per MPEP §2106.04(a)(2)(III)).
Claim 6 further recites the additional elements “evaluation model,” “second machine learning model,” and “analysis software”; however, these additional elements are not sufficient to recite a practical application of the abstract ideas recited, as they are mere generic computer elements and thus amount to no more than a recitation of the words "apply it" (or an equivalent), or are no more than mere instructions to implement an abstract idea or other exception on a computer. See MPEP §2106.05(f).
Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because when considered separately and in combination, the above recited additional elements do not add significantly more (also known as an “inventive concept”) to the exception. Rather, the additional elements disclosed above perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d).
Regarding claims 7–9, the claimed specific model types further limiting the claimed second machine learning model are not sufficient to recite a practical application of the abstract ideas recited, as they are mere generic computer elements and thus amount to no more than a recitation of the words "apply it" (or an equivalent), or are no more than mere instructions to implement an abstract idea or other exception on a computer; they also are not significantly more because they perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP §§ 2106.05(d) and (f). The additional details regarding the evaluation result recited in claims 8–9 are simply extensions of the abstract ideas recited in claim 6, without being a practical application or significantly more.
Regarding claim 10, the claimed “(a) generating, by the analysis software, a software-tracked contour from at least one second training image” is an abstract idea as a mental process as the human mind is capable of generating a contour from an image using a pen and paper. Further, the claimed “and (b) calculating one of the at least one geometric parameter sets based on the software-tracked contour” is an abstract idea as a mathematical calculation.
Regarding claim 11, the second training image being an echocardiographic image is simply an extension of the abstract ideas recited in claim 6, without being a practical application or significantly more.
Regarding claim 12, the claimed “(a) obtaining a second training image set comprising at least one image” is an abstract idea as a mental process as a human mind is capable of selecting images to obtain them per MPEP §2106.04(a)(2)(III). Further, the claimed “(b) generating, by a difference model, one of the at least one difference parameter set based on the second training image set” is an abstract idea as a mental process as a human mind is capable of determining differences in parameter values using pen and paper per MPEP §2106.04(a)(2)(III).
Regarding claim 13, this claim recites the following limitations which are found to be abstract ideas not reciting a practical application or significantly more: (a) generating, by the analysis software, the software-tracked contour from at least one image; (abstract idea as a mental process as a human mind is capable of determining a contour from images or video per MPEP §2106.04(a)(2)(III));
(b) obtaining an adjusted contour (abstract idea as a mental process as a human mind is capable of adjusting a contour using pen and paper per MPEP §2106.04(a)(2)(III)); and
(c) calculating one of the at least one difference parameter set based on the software-tracked contour and the adjusted contour (abstract idea as a mental process as a human mind is capable of determining differences in parameter values of different contours using pen and paper per MPEP §2106.04(a)(2)(III)).
Regarding claim 14, this claim recites the following limitations which are found to be abstract ideas not reciting a practical application or significantly more: wherein the at least one geometric parameter set comprises an ED (end-diastolic) geometric parameter set and an ES (end-systolic) geometric parameter set; and the ED geometric parameter set and the ES geometric parameter set are generated by the steps of: (a) obtaining an ED training image set by selecting at least one ED image; (b) obtaining an ES training image set by selecting at least one ES image (abstract idea as a mental process as a human mind is capable of selecting images to obtain them per MPEP §2106.04(a)(2)(III)); (c) generating, by the analysis software, a tracked ED contour based on the selected at least one ED image; (d) generating, by the analysis software, a tracked ES contour based on the selected at least one ES image (abstract idea as a mental process as a human mind is capable of determining a contour from images or video per MPEP §2106.04(a)(2)(III)); (e) calculating the ED geometric parameter set based on the tracked ED contour; and (f) calculating the ES geometric parameter set based on the tracked ES contour (abstract idea as mathematical concepts per MPEP §2106.04(a)(2)(I)).
Regarding claim 15, the at least one difference parameter set comprising an ED (end-diastolic) difference parameter set and an ES (end-systolic) difference parameter set is simply an extension of the abstract ideas recited in claim 6, without being a practical application or significantly more. Further, the claimed generating … the ED difference parameter set, and generating … the ES difference parameter set, are directed towards abstract ideas as mathematical concepts per MPEP §2106.04(a)(2)(I). Claim 15 further recites the additional elements “an ED difference model” and “an ES difference model”; however, these additional elements are not sufficient to recite a practical application of the abstract ideas recited, as they are mere generic computer elements and thus amount to no more than a recitation of the words "apply it" (or an equivalent), or are no more than mere instructions to implement an abstract idea or other exception on a computer. See MPEP §2106.05(f).
Further, claim 15 does not include additional elements that are sufficient to amount to significantly more than the judicial exception because when considered separately and in combination, the above recited additional elements do not add significantly more (also known as an “inventive concept”) to the exception. Rather, the additional elements disclosed above perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d).
Regarding claim 16, the at least one difference parameter set comprises an ED (end-diastolic) difference parameter set and an ES (end-systolic) difference parameter set is simply an extension of the abstract ideas recited in claim 6, without being a practical application or significantly more. Further, this claim recites the following limitations which are found to be abstract ideas not reciting a practical application or significantly more: (a) obtaining an adjusted ED contour and an adjusted ES contour (abstract idea as a mental process as a human mind is capable of adjusting a contour using pen and paper per MPEP §2106.04(a)(2)(III)); (b) calculating the ED difference parameter set based on the tracked ED contour and the adjusted ED contour; and (c) calculating the ES difference parameter set based on the tracked ES contour and the adjusted ES contour (abstract ideas as mathematical concepts per MPEP §2106.04(a)(2)(I)).
Regarding claim 17, this claim recites the following limitations which are found to be abstract ideas not reciting a practical application or significantly more: (a) calculating a software-generated analysis result based on the tracked ED contour and the tracked ES contour (abstract idea as mathematical concepts per MPEP §2106.04(a)(2)(I)); (b) obtaining an adjusted ED contour and an adjusted ES contour (abstract idea as a mental process as a human mind is capable of adjusting a contour using pen and paper per MPEP §2106.04(a)(2)(III)); (c) calculating an adjusted analysis result based on the adjusted ED contour and the adjusted ES contour (abstract idea as mathematical concepts per MPEP §2106.04(a)(2)(I)); (d) determining the evaluation result based on the software-generated analysis result and the adjusted analysis result (abstract idea as a mental process as a human mind is capable of evaluating differences in results at least by using pen and paper per MPEP §2106.04(a)(2)(III)).
Regarding claims 18 and 25, these claims recite the following limitations which are found to be abstract ideas not reciting a practical application or significantly more, with claim 18 being exemplary:
A method of quality control for software-analyzed images, comprising:
wherein the at least one corresponding software-analyzed image is generated by analyzing the at least one input image with an analysis software (abstract idea as a mental process as a human mind is capable of analyzing an image and then generating an image using pen and paper per MPEP §2106.04(a)(2)(III));
(b) generating, by at least one difference model, at least one set of predicted difference parameters based on the at least one input image (abstract idea as a mental process as a human mind is capable of generating differences in parameter values using pen and paper per MPEP §2106.04(a)(2)(III));
(c) generating at least one set of geometric parameters from the at least one corresponding software-analyzed image (abstract idea as a mental process as a human mind is capable of generating geometric parameters using pen and paper per MPEP §2106.04(a)(2)(III)); and
(d) generating, by an evaluation model, a predicted evaluation result based on the at least one set of predicted difference parameters and the at least one set of geometric parameters (abstract idea as a mental process as a human mind is capable of making evaluations based on parameters per MPEP §2106.04(a)(2)(III)).
This judicial exception is not integrated into a practical application for the following reasons. Claims 18 and 25 both recite the additional element of “(a) receiving at least one input image and at least one corresponding software-analyzed image,” which, while not necessarily an abstract idea, is insignificant extra-solution activity because it is mere data gathering (see MPEP §2106.05(g)). Moreover, this element amounts to receiving data in a computer-based system and is well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II.
Claims 18 and 25 further recite the additional elements “analysis software,” “difference model,” and “evaluation model,” and claim 25 is directed towards a “non-transitory computer-readable medium having stored thereon a set of instructions that are executable by a processor of a computer system.” These additional elements are not sufficient to recite a practical application of the abstract ideas recited in claims 18 and 25, as they are mere generic computer elements and thus amount to no more than a recitation of the words "apply it" (or an equivalent), or are no more than mere instructions to implement an abstract idea or other exception on a computer. See MPEP §2106.05(f).
Further, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because when considered separately and in combination, the above recited additional elements from claims 18 and 25 do not add significantly more (also known as an “inventive concept”) to the exception. Rather, the additional elements disclosed above perform well-understood, routine, conventional computer functions as recognized by the court decisions listed in MPEP § 2106.05(d).
Regarding claim 19, the limitations are merely directed towards further abstract ideas, specifically mental processes per MPEP §2106.04(a)(2)(III), because the human mind is practically capable of processing an image.
Regarding claim 20, the input image being an echocardiographic image is simply an extension of the abstract ideas recited in claim 18, without being a practical application or significantly more.
Regarding claims 21–22, the recited evaluation result being an error value “indicating” something in particular is simply an extension of the abstract ideas recited in claim 18, without being a practical application or significantly more.
Regarding claim 23, further detail regarding the recited at least one corresponding software-analyzed image is simply an extension of the abstract ideas recited in claim 18, without being a practical application or significantly more.
Regarding claim 24, further detail regarding the recited at least one input image, the at least one corresponding software-analyzed image, the at least one difference model, the at least one set of predicted difference parameters, and the at least one set of geometric parameters are simply extensions of the abstract ideas recited in claim 18, without being a practical application or significantly more.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1–6, 10–14, 17 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Sirjani et al., “Automatic cardiac evaluations using a deep video object segmentation network,” Insights Imaging 13, 69 (April 8, 2022), https://doi.org/10.1186/s13244-022-01212-9 (herein “Sirjani”) in view of Cheng et al., WIPO PCT Application Publication No. WO 2023/102749A1, with reference to provided machine English language translation (herein “Cheng”).
Regarding claim 1, with deficiencies noted in square brackets [], Sirjani teaches a method of training a difference model (Sirjani pages 3–4, figs. 1 and 2, training of the EchoRCNN (model)) to generate difference parameters related to the differences between a software-tracked contour and an adjusted contour (Sirjani pages 3–4, the EchoRCNN including a regression network which predicts (generates) offsets (difference) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (adjusted contour)), comprising:
training a first machine learning model with multiple first training data sets, each of the multiple first training data sets comprises a first training image set as the input for training, and a difference parameter set as [the target for training] (Sirjani pages 3–4, fig. 1, training the network using LV and RV datasets, the datasets described on pages 2–3 as being 2D echocardiography frames from a video, and where one of the inputs to the segmentation subnet is the output of the regression subnet, described on page 4 as being a predicted offset (difference) between anchor points A), wherein the first training image set and the difference parameter set are generated by the steps of:
(a) obtaining the first training image set by selecting at least one image (Sirjani page 2, a collection of 2D echocardiography series was prepared by selecting 750 view series with proper LV (left ventricle) shapes);
(b) generating, by an analysis software, the software-tracked contour based on the first training image set (Sirjani page 2, delineation (software-tracked contour) upon frames of each view series was performed using the Auto 2D Quantification (a2DQ) tool in the Qlab Cardiac Analysis (analysis software));
(c) obtaining the adjusted contour (Sirjani page 2, users manually refine the points on the walls to correct the estimated region for LV); and
(d) obtaining the difference parameter set based on the software-tracked contour and the adjusted contour (Sirjani page 4, the four outputs of the regression subnet are offsets (difference parameters) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (adjusted contour)).
While Sirjani teaches that offsets are predicted by a regression subnet and input into the segmentation subnet, Sirjani does not explicitly teach that the offsets are “the target for training.”
Cheng teaches a difference parameter set as the target for training (Cheng pages 11–12, a difference model is constructed based on point differences including gray value differences, brightness differences, resolution differences (parameter sets) respectively corresponding to a plurality of position points, the model being constructed through regression analysis).
Therefore, taking the teachings of Sirjani and Cheng together as a whole, it would have been obvious to a person having ordinary skill in the art (herein “PHOSITA”) before the effective filing date of the claimed invention to have modified the regression subnet of Sirjani to include receiving difference value sets to train on as disclosed in Cheng, at least because doing so would allow for combining different imaging methods to leverage the advantages of each imaging method (see Cheng pages 1–2, and 7).
Regarding claim 2, Sirjani teaches wherein each image in the first training image set is an echocardiographic image (Sirjani pages 2–4, training using LV and RV datasets from 2D echocardiography series).
Regarding claim 3, Sirjani teaches wherein each image of the first training image set is processed according to the software-tracked contour before being used as the input for training (Sirjani pages 2–3, Fig. 1, delineation (software-tracked contour) was performed using the Auto 2D Quantification (a2DQ) tool as a first step before pre-processing and inputting into the main EchoRCNN network).
Regarding claim 4, Sirjani teaches wherein the first machine learning model is a regression model based on convolutional neural network (Sirjani page 4, Figs. 1 and 2, regression subnet with convolutional layers like the classification subnet).
Regarding claim 5, Sirjani teaches wherein the first machine learning model is a residual neural network (ResNet) model (Sirjani Fig. 3, page 5, “box subnet” which is the regression subnet comprising a ResNet50 backbone).
Regarding claim 6, with deficiencies noted in square brackets [], Sirjani teaches a method of training an evaluation model (Sirjani pages 3–4, figs. 1 and 2, training of the EchoRCNN (model)) to generate predicted evaluation errors related to the differences between a software-generated analysis result and an adjusted analysis result (Sirjani pages 3–4, fig. 1, the EchoRCNN including a regression network which predicts (generates) offsets (difference) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (contour) which has its outputs combined with a classification subnet), comprising: training a second machine learning model with multiple second training data sets, each of the multiple second training data sets comprises at least one difference parameter set as inputs for training (Sirjani page 3, Fig. 1, outputs of both the reference stream with the user annotated/adjusted images, and the main stream with the current image of a video sequence with a previously predicted mask, are fed into the same network, thus both streams comprising a set of data with differences in parameters), at least one geometric parameter set as inputs for training (Sirjani pages 3–4, five feature pyramids as feature maps corresponding to anchor points (geometric) of an object are generated and input to the classification and regression subnets), and an evaluation result as target for training (Sirjani page 3, Fig. 1, the previously predicted mask (result from the segmentation subnet) is upstream of the classification and regression subnets, and thus becomes an input for training), wherein:
each of the at least one difference parameter set [indicates the differences] between a software-tracked contour and an adjusted contour (Sirjani page 3, Fig. 1, outputs of both the reference stream with the user annotated/adjusted images, and the main stream with the current image of a video sequence with a previously predicted mask are fed into the same network, thus both streams comprising a set of data with differences in parameters);
each of the at least one geometric parameter set is calculated based on the software-tracked contour generated by an analysis software (Sirjani pages 2–4, image data input into the feature pyramid network to obtain the five feature pyramids as feature maps originates from reference images and masks determined by the quantification software Qlab Cardiac Analysis, which performs delineation (software-tracked contour)); and
the evaluation result is determined based on the differences between a software-generated analysis result and an adjusted analysis result (Sirjani pages 3–4, fig. 1, regression subnet like the classification subnet, and regressing the offset from each anchor box to a close ground truth to predict the offset between the anchor and the ground truth box, where the data analyzed by the regression subnet includes the Qlab Cardiac Analysis delineated images and user refined segmentation images as well as the previously predicted segmentation images from the segmentation network).
While Sirjani teaches that both a user annotated image stream of features and a segmentation subnet determined image of features are input to the classification subnet and regression subnet at the same time, such that there would be differences between the two streams, Sirjani does not explicitly teach that these two streams “indicate the differences” between them.
Cheng teaches “indicate the differences” (Cheng pages 11–12, a difference model is constructed based on point differences including gray value differences, brightness differences, and resolution differences respectively corresponding to a plurality of position points, between the target image and reference image).
Therefore, taking the teachings of Sirjani and Cheng together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the regression and classification subnets of Sirjani to include receiving difference values as disclosed in Cheng, at least because doing so would allow for combining different imaging methods to leverage the advantages of each imaging method (see Cheng pages 1–2 and 7).
Regarding claim 10, Sirjani teaches wherein each of the at least one geometric parameter set is generated by the steps of (Sirjani pages 3–4, five feature pyramids as feature maps corresponding to anchor points (geometric) of an object are generated and input to the classification and regression subnets, where figs. 1–2 illustrate the upstream processing which includes): (a) generating, by the analysis software, a software-tracked contour from at least one second training image (Sirjani pages 2–4, reference images and masks determined by quantification software Qlab Cardiac Analysis which performs delineation (software-tracked contour)); and (b) calculating one of the at least one geometric parameter sets based on the software-tracked contour (Sirjani pages 3–4, figs. 1–2, feature pyramids generated from the input images including the Qlab delineated image).
Regarding claim 11, Sirjani teaches wherein the second training image is an echocardiographic image (Sirjani pages 2–4, training using LV and RV datasets from 2D echocardiography series).
Regarding claim 12, Sirjani teaches wherein each of the at least one difference parameter set is generated by the steps of (Sirjani pages 3–4, the EchoRCNN including a regression network which predicts (generates) offsets (difference) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (contour) which has its outputs combined with a classification subnet, where figs. 1–2 illustrate the upstream processing which includes): (a) obtaining a second training image set comprising at least one image (Sirjani page 2, a collection of 2D echocardiography series was prepared by selecting 750 view series with proper LV (left ventricle) shapes, from which the Qlab Cardiac Analysis delineation (contour) and user refined contour images are derived); and (b) generating, by a difference model, one of the at least one difference parameter set based on the second training image set (Sirjani page 4, the four outputs of the regression subnet are offsets (difference parameters) between A anchors from spatial positions of an object from a classification subnet, and a ground truth from a manually refined contour).
Regarding claim 13, Sirjani teaches wherein each of the at least one difference parameter set is generated by the steps of (Sirjani pages 3–4, the EchoRCNN including a regression network which predicts (generates) offsets (difference) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (contour) which has its outputs combined with a classification subnet, where figs. 1–2 illustrate the upstream processing which includes): (a) generating, by the analysis software, the software-tracked contour from at least one image set (Sirjani page 2, delineation (software-tracked contour) was performed using the Auto 2D Quantification (a2DQ) tool in the Qlab Cardiac Analysis (analysis software)); (b) obtaining an adjusted contour (Sirjani page 2, users manually refine the points on the walls to correct the estimated region for LV); and (c) calculating one of the at least one difference parameter set based on the software-tracked contour and the adjusted contour (Sirjani page 4, the four outputs of the regression subnet are offsets (difference parameters) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (adjusted contour)).
Regarding claim 14, Sirjani teaches wherein the at least one geometric parameter set comprises an ED (end-diastolic) geometric parameter set and an ES (end-systolic) geometric parameter set (Sirjani pages 3–4, fig. 1, data which upstream is generated as feature pyramids including anchor points to contours, where in the second to last processing step, end-diastolic frames are detected apart from the end-systolic frames, thus the upstream data containing geometric parameters capable of distinguishing ED from ES images); and the ED geometric parameter set and the ES geometric parameter set are generated by the steps of: (a) obtaining an ED training image set by selecting at least one ED image; (b) obtaining an ES training image set by selecting at least one ES image (Sirjani page 2, fig. 1, a collection of 2D echocardiography series was prepared by selecting 750 view series with proper LV (left ventricle) shapes, which are later processed to be either an ES image or ED image, thus both types of images are obtained by selection); (c) generating, by the analysis software, a tracked ED contour based on the selected at least one ED image; (d) generating, by the analysis software, a tracked ES contour based on the selected at least one ES image (Sirjani page 2, delineation (software-tracked contour) was performed using the Auto 2D Quantification (a2DQ) tool in the Qlab Cardiac Analysis (analysis software), where fig. 1 illustrates that downstream the image is evaluated for either being an ES or ED image, thus the contours present in the image being either an ES or ED contour); (e) calculating the ED geometric parameter set based on the tracked ED contour; and (f) calculating the ES geometric parameter set based on the tracked ES contour (Sirjani pages 2–4, image data input into the feature pyramid network to obtain the five feature pyramids as feature maps originates from reference images and masks determined by the quantification software Qlab Cardiac Analysis, which performs delineation (software-tracked contour), where downstream as shown in fig. 1, the images’ contours are evaluated to be ES or ED, thus the feature pyramids and anchors being ED and ES parameter sets as well).
Regarding claim 17, Sirjani teaches wherein the evaluation result is generated by the steps of: (a) calculating a software-generated analysis result based on the tracked ED contour and the tracked ES contour (Sirjani pages 6–7, fig. 1, downstream from the processed contours of the images, the length and area of the ventricles are determined to detect the ES and ED frames and then to calculate EF (a software-generated analysis result)); (b) obtaining an adjusted ED contour and an adjusted ES contour (Sirjani page 2, users manually refine the points on the walls to correct the estimated region for LV, where fig. 1 illustrates that downstream the images are determined to be ES or ED, therefore the manual refinement upstream is an adjustment to an ES or ED contour also); (c) calculating an adjusted analysis result based on the adjusted ED contour and the adjusted ES contour (Sirjani page 4, the four outputs of the regression subnet are offsets (difference parameters) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (adjusted contour), where fig. 1 illustrates that downstream the images are determined to be ES or ED, therefore the manual refinement upstream is an adjustment to an ES or ED contour also); and (d) determining the evaluation result based on the software-generated analysis result and the adjusted analysis result (Sirjani pages 6–7, fig. 1, downstream from the processed contours of the images (software-generated analysis result) including the manual refinement (adjusted analysis result), the length and area of the ventricles are determined to detect the ES and ED frames and then to calculate EF (evaluation result)).
Regarding claim 21, Sirjani teaches the difference between a software-generated analysis result and an adjusted analysis result (Sirjani pages 3–4, fig. 1, the EchoRCNN including a regression network which predicts (generates) offsets (difference) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (contour) which has its outputs combined with a classification subnet), but does not explicitly teach wherein the evaluation result is an error value indicating that difference. Cheng teaches an evaluation result that is an error value indicating such a difference (Cheng pages 11–12, a difference model is constructed based on point differences including gray value differences, brightness differences, and resolution differences respectively corresponding to a plurality of position points, between the target image and reference image).
Therefore, taking the teachings of Sirjani and Cheng together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the regression and classification subnets of Sirjani to include receiving difference values as disclosed in Cheng, at least because doing so would allow for combining different imaging methods to leverage the advantages of each imaging method (see Cheng pages 1–2 and 7).
Claims 18–20, and 22–25 are rejected under 35 U.S.C. 103 as being unpatentable over Sirjani.
Regarding claims 18 and 25, Sirjani teaches [Claim 18: a method of quality control for software-analyzed images, comprising (Sirjani pages 2–3, fig. 1, ventricle assessment using an EchoRCNN architecture including a manual refinement of contouring of ventricle borders to correct the regions estimated by analysis software, thus controlling for quality)][Claim 25: a non-transitory computer-readable medium having stored thereon a set of instructions that are executable by a processor of a computer system to carry out a method of (Sirjani page 6, implemented on a system with 16GB of RAM and an Intel Xeon processor in a Python environment with Tensorflow 2)]:
(a) receiving at least one input image and at least one corresponding software-analyzed image, wherein the at least one corresponding software-analyzed image is generated by analyzing the at least one input image with an analysis software (Sirjani page 2, a collection of 2D echocardiography series was prepared by selecting 750 view series with proper LV (left ventricle) shapes, then delineated using the Qlab Cardiac Analysis software);
(b) generating, by at least one difference model, at least one set of predicted difference parameters based on the at least one input image (Sirjani page 4, the four outputs of the regression subnet are predicted offsets (difference parameters) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (adjusted contour));
(c) generating at least one set of geometric parameters from the at least one corresponding software-analyzed image (Sirjani pages 3–4, five feature pyramids as feature maps corresponding to anchor points (geometric) of an object are generated and input to the classification and regression subnets); and
(d) generating, by an evaluation model, a predicted evaluation result based on the at least one set of predicted difference parameters and the at least one set of geometric parameters (Sirjani pages 6–7, fig. 1, downstream from the processed contours of the images including the regression and classification subnets, the length and area of the ventricles are determined to detect the ES and ED frames and then to calculate EF (a predicted evaluation result)).
While Sirjani as set forth above teaches software-analyzed images from the delineation of the Qlab Cardiac Analysis software, and finding an EF result downstream, these two portions of Sirjani come from different teachings: 1) the training of the EchoRCNN model; and 2) the implementation of the model. Therefore, to the extent integrating the teachings from a training portion with an implementation portion is not anticipatory, it would nonetheless have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the system operations disclosed in Sirjani to include both training and implementation in one process, at least because a PHOSITA could do so simply by concatenating the training processing algorithm to the implementation algorithm, with predictable results. Therefore, such a combination would be combining prior art elements according to known methods to yield predictable results. See MPEP §2143(I)(A).
Regarding claim 19, Sirjani teaches, before generating the at least one set of predicted difference parameters, further comprising: processing the at least one input image based on the at least one software-analyzed image (Sirjani pages 2–3, Fig. 1, delineation (software-tracked contour) was performed using the Auto 2D Quantification (a2DQ) tool as a first step before pre-processing and inputting into the main EchoRCNN network including the regression subnet).
Regarding claim 20, Sirjani teaches wherein the input image is an echocardiographic image (Sirjani pages 2–4, training using LV and RV datasets from 2D echocardiography series).
Regarding claim 22, Sirjani teaches wherein the evaluation result is a class indicating a good quality or bad quality of a software-generated analysis result (Sirjani page 4, figs. 1–2, classification subnet predicts probability (quality) of an object (contour) being at each spatial position for A anchors, thus indicating the quality of the software analysis result to a ground truth mask).
Regarding claim 23, Sirjani teaches wherein the at least one corresponding software-analyzed image includes a software-tracked contour (Sirjani pages 2–4, image data analyzed by quantification software Qlab Cardiac Analysis which performs delineation (software-tracked contour)).
Regarding claim 24, Sirjani teaches wherein: the at least one input image comprises at least one ED input image and at least one ES input image (Sirjani page 2, fig. 1, a collection of 2D echocardiography series was prepared by selecting 750 view series with proper LV (left ventricle) shapes, which are later processed to be either an ES image or ED image, thus both types of images are obtained by selection); the at least one corresponding software-analyzed image comprises an ED corresponding image and an ES corresponding image (Sirjani page 2, delineation (software-analyzed) was performed using the Auto 2D Quantification (a2DQ) tool in the Qlab Cardiac Analysis, where fig. 1 illustrates that downstream the image is evaluated for either being an ES or ED image, thus the contours present in the image being either an ES or ED contour); the at least one difference model comprises an ED difference model and an ES difference model (Sirjani pages 3–4, the EchoRCNN including a regression network which predicts (generates) offsets (difference) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (contour), where fig. 1 illustrates that downstream the image is evaluated for either being an ES or ED image, thus the differences present being either an ES or ED difference); the at least one set of predicted difference parameters comprises one set of predicted ED difference parameters and one set of predicted ES difference parameters (Sirjani page 4, the four outputs of the regression subnet are offsets (difference parameters) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (adjusted contour), where fig. 1 illustrates that downstream the image is evaluated for either being an ES or ED image, thus the differences present being either an ES or ED difference parameter); and the at least one set of geometric parameters comprises one set of ED geometric parameters and one set of ES geometric parameters (Sirjani pages 3–4, fig. 1, data which upstream is generated as feature pyramids including anchor points to contours, where in the second to last processing step, end-diastolic frames are detected apart from the end-systolic frames, thus the upstream data containing geometric parameters capable of distinguishing ED from ES images).
While Sirjani teaches one regression subnet outputting predicted offset (difference) values for images that are later determined to be ED or ES images, Sirjani does not teach two regression subnets, for two different difference models as claimed. However, such a modification to Sirjani to have two regression subnets would have been obvious to a PHOSITA before the effective filing date of the claimed invention at least because doing so would be a simple duplication of parts. See MPEP §2144.04(VI)(B).
Claims 7–9 are rejected under 35 U.S.C. 103 as being unpatentable over Sirjani in view of Cheng, as set forth above regarding claim 6 from which claims 7–9 depend, further in view of Haeusser et al., US Patent Application Publication No. US 2020/0345261 A1 (herein “Haeusser”).
Regarding claim 7, Sirjani as modified above teaches the second machine learning model, but does not explicitly teach that it is a tree-based model. Haeusser teaches in ¶274 a regression model that is a decision tree regression. Therefore, taking the teachings of Sirjani as modified by Cheng above, and Haeusser together as a whole, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the regression model of Sirjani to be a decision tree regression as disclosed in Haeusser at least because doing so would allow for predictions to be made for multiple types of heart structures simultaneously. See Haeusser ¶278.
Regarding claim 8, Sirjani teaches wherein the second machine learning model is a regression model (Sirjani page 4, Figs. 1 and 2, regression subnet with convolutional layers like the classification subnet), and the evaluation result is an error value indicating the difference between the software-generated analysis result and the adjusted analysis result (Sirjani pages 3-4, fig. 1, the EchoRCNN including a regression network which predicts (generates) offsets (difference) between A anchors from spatial positions of an object from a classification subnet (software-tracked contour), and a ground truth from a manually refined border (contour) which has its outputs combined with a classification subnet).
Regarding claim 9, Sirjani teaches wherein the second machine learning model is a classification model (Sirjani page 4, figs. 1–2, classification subnet), and the evaluation result is a class indicating a good quality or bad quality of the software-generated analysis result (Sirjani page 4, figs. 1–2, classification subnet predicts probability (quality) of an object (contour) being at each spatial position for A anchors, thus indicating the quality of the software analysis result to a ground truth mask).
Allowable Subject Matter
Claims 15 and 16 would be allowable if rewritten or persuasively argued to overcome the rejections under 35 U.S.C. 101 set forth in this Office action, and to include all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the closest cited art of record includes Sirjani, as applied above to claim 14 from which claims 15 and 16 depend. While Sirjani teaches evaluating echocardiogram images in consideration of manual adjustments to a computer-generated contour, and finding offset/difference values from these images, Sirjani does not explicitly teach that the offsets/differences are two specific ED and ES difference parameter sets each generated by its own model, based on specific ED and ES training sets, together with all of the other limitations recited by independent claim 6 and intervening claim 14. Therefore, no combination of the cited art of record, whether considered alone or in a combination obvious to a PHOSITA, teaches or suggests the limitations of claims 15 and 16.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Kim et al., US Patent Application Publication No. US 2024/0281969 A1, directed towards analyzing medical images into regions of interest, including analyzing for correction parameters and outputting error information.
Sun et al., US Patent Application Publication No. US 2021/0125331 A1, directed towards determining shape parameters of an input image including shape variances.
Applicant's amendment necessitated any potentially interpreted new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M KOETH whose telephone number is (571)272-5908. The examiner can normally be reached Monday-Thursday, 09:00-17:00, Friday 09:00-13:00, EDT/EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MICHELLE M. KOETH
Primary Examiner
Art Unit 2671
/MICHELLE M KOETH/Primary Examiner, Art Unit 2671