Prosecution Insights
Last updated: April 19, 2026
Application No. 18/818,311

APPARATUS AND METHOD FOR GENERATING A THREE-DIMENSIONAL (3D) MODEL WITH AN OVERLAY

Non-Final OA: §103, §DP
Filed: Aug 28, 2024
Examiner: Ha, Alicia
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Anumana, Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs. Tech Center average)
Interview Lift: +0.0% (minimal; based on resolved cases with interview)
Typical Timeline: 2y 9m average prosecution
Career History: 12 total applications across all art units; 12 currently pending

Statute-Specific Performance

§101: 3.6% (-36.4% vs. TC avg)
§103: 67.9% (+27.9% vs. TC avg)
§102: 10.7% (-29.3% vs. TC avg)
§112: 10.7% (-29.3% vs. TC avg)
Tech Center average estimates shown for comparison. Based on career data from 0 resolved cases.

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The specification is objected to for the following informalities:
- In paragraph 0009, line 5, “networks based estimation” should be “networks-based estimation”.
- In paragraph 0016, line 10, “an animal models” should be “an animal model”.
- In paragraph 0053, line 1, “Stil referring to FIG. 1” should be “Still referring to FIG. 1”.
- In paragraph 0076, lines 13-14, “a bold lines” should be “a bold line”.
- In paragraph 0096, line 13, “a processes” should be “a process”.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 6-11, and 16-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4-6, 9, 11, 16-18, 21, and 23 of U.S. Patent No. 12,462,478 in view of Azizian et al. (US 12211609 B1, hereinafter Azizian). Below is a limitation mapping between claim 1 of the current application and claim 1 of U.S. Patent No. 12,462,478.

Application claim 1: An apparatus for generating a three-dimensional (3D) model with an overlay, wherein the apparatus comprises:
Patent claim 1: An apparatus for generating a three-dimensional (3D) model of cardiac anatomy with an overlay, wherein the apparatus comprises:

Application claim 1: at least a processor;
Patent claim 1: at least a processor;

Application claim 1: and a memory communicatively connected to the at least a processor,
Patent claim 1: and a memory communicatively connected to the at least a processor,

Application claim 1: wherein the memory contains instructions configuring the at least a processor to: receive a set of ultrasonic images of an organ of a subject;
Patent claim 1: wherein the memory contains instructions configuring the at least a processor to: receive a set of images of a cardiac anatomy pertaining to a subject, wherein receiving the set of images of the cardiac anatomy comprises extracting the set of images of the cardiac anatomy from a patient profile;

Application claim 1: generate a set of shape parameters representing the organ’s shape as a function of the set of ultrasonic images and a shape identification model trained on a training dataset comprising historical ultrasonic images correlated with historical computed tomography scan data;
Patent claim 1: generate a set of shape parameters based on the set of images, wherein generating the set of shape parameters comprises generating the set of shape parameters as a function of the set of images and a shape identification model;

Application claim 1: generate a 3D model of the organ based on the set of shape parameters;
Patent claim 1: generate a 3D model of the cardiac anatomy based on the set of shape parameters, wherein generating the 3D model includes transforming the 3D model as a function of a plurality of mode changers within a statistical shape model;

Application claim 1: generate a map by determining a level of uncertainty at each location of a plurality of locations on the 3D model;
Patent claim 1: generate a map by determining a level of uncertainty at each location of a plurality of locations on the generated 3D model, wherein the map comprises a color-coded heatmap based on one or more levels of uncertainty, wherein each level of the one or more levels of uncertainty is assigned to at least an uncertainty category comprising a pixel-wise uncertainty associated with individual pixels in at least one image of the set of images;

Application claim 1: and overlay the map onto the 3D model.
Patent claim 1: and overlay the map onto the 3D model.

Although the claims at issue are not identical, they are not patentably distinct from each other because claim 1 of U.S. Patent No. 12,462,478 includes all of the limitations of claim 1 of the current application, with the exception of a set of ultrasonic images of an organ of a subject (current claim 1) instead of a set of images of a cardiac anatomy pertaining to a subject (patent claim 1), and a shape identification model trained on a training dataset comprising historical ultrasonic images correlated with historical computed tomography scan data (current claim 1). In the same art of training neural networks using ultrasonic images, Azizian teaches receiving a set of ultrasonic images of an organ of a subject to be analogous to a set of images of a cardiac anatomy pertaining to a subject ([col. 7, lines 35-46] “In at least one embodiment, such segmentation can be used with medical images… this can include computerized tomography (CT) and/or magnetic resonance imaging (MRI) images, histopathologic images, as well as data from ultrasound scans or other such processes… this can include identifying and parsing anatomical objects (e.g.
organs, bones, or tumors) in 2D, 3D, or 4D medical images.”) and a shape identification model trained on a training dataset comprising historical ultrasonic images correlated with historical computed tomography scan data ([col. 4, lines 25-30] “In at least one embodiment, a number of different 2D slices can be generated from a single 3D image, such as 3D CT scan data, and multiple regions selected from each 2D slice, which can be used to synthesize several different ultrasound images, which can each then be used as training data (or for other such purposes)”). Azizian further teaches that the training dataset correlates historical ultrasound images with historical CT scan data to create accurate and valid training data in an easier and less expensive manner ([col. 1, lines 14-22] “Various diagnostic approaches, such as those that utilize machine learning, can benefit from the use of a large set of accurately labeled training data. For data relating to radiological healthcare data, such as ultrasound image data, such accurately labeled training data can be difficult and expensive to generate or obtain.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Azizian to the patented claims.

As claim 11 of the current application has substantially similar limitations to claim 1 of the current application but in method form, in the same manner as claim 13 of the patent having substantially similar limitations to claim 1 of the patent but in method form, claim 11 of the current application is rejected under the same rationale. The rest of the comparisons can be seen in the tables below.

Part 1 of claim mapping between the current application and U.S. Patent No. 12,462,478:
Current application claims: 1, 6, 7, 8, 9, 10, 11
Patent claims: 1, 4, 5, 6, 9, 11, 13

Part 2 of claim mapping between the current application and U.S. Patent No. 12,462,478:
Current application claims: 16, 17, 18, 19, 20
Patent claims: 16, 17, 18, 21, 23

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 6-8, 10-14, 16-18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Weber et al. (US 2020/0074664 A1, hereinafter Weber), in view of Amyot et al. (US 2012/0128218 A1, hereinafter Amyot), and further in view of Azizian et al. (US 12211609 B1, hereinafter Azizian).
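For orientation before the element-by-element analysis, the processing chain recited in independent claim 1 (ultrasonic images, to shape parameters, to a 3D model, to a per-location uncertainty map, to an overlay) can be illustrated with a minimal, purely hypothetical sketch. None of the function names, data, or numerical values below come from the application or the cited references; this is only a toy stand-in for each claimed step:

```python
# Illustrative sketch of the data flow recited in claim 1.
# All names and computations are hypothetical placeholders.

def shape_parameters(images):
    """Stand-in 'shape identification model': reduce each image to one number."""
    return [sum(img) / len(img) for img in images]

def build_3d_model(params):
    """Stand-in 3D model: one vertex per shape parameter."""
    return [(i, p, 0.0) for i, p in enumerate(params)]

def uncertainty_map(model, images):
    """Assign a level of uncertainty to each model location (here, input spread)."""
    mean = sum(sum(img) / len(img) for img in images) / len(images)
    return {i: abs(v[1] - mean) for i, v in enumerate(model)}

def overlay(model, umap):
    """Attach the uncertainty value to each model location."""
    return [(x, y, z, umap[i]) for i, (x, y, z) in enumerate(model)]

images = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]
params = shape_parameters(images)               # [2.0, 3.0]
model = build_3d_model(params)
result = overlay(model, uncertainty_map(model, images))
```

The sketch only mirrors the order of the claimed steps; the claim-specific details (training on correlated ultrasound/CT data, pixel-wise uncertainty categories) are where the rejection's combination of references does its work.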
Regarding claim 1, Weber teaches an apparatus for generating a three-dimensional (3D) model with an overlay, wherein the apparatus comprises: ([0039] “Such an ultrasound image processing apparatus therefore is configured to develop and/or implement the model for estimating the 3-D anatomical body measurement value of interest from one or more 2-D ultrasound images obtained by the ultrasound image processing apparatus”, where “The graphics processor 50 can also generate graphic overlays for display with the ultrasound images, such as the overlay of the heart model 1 over a cardiac ultrasound image to which the heart model 1 is mapped.” [0064]) at least a processor; ([0039] “According to another aspect, there is provided an ultrasound image processing apparatus comprising a processor arrangement and the aforementioned computer program product, wherein the processor arrangement is adapted to execute said computer readable program instructions.”) and a memory communicatively connected to the at least a processor, ([0038] “According to another aspect, there is provided a computer program product comprising a computer readable storage medium having computer readable program instructions embodied therewith for, when executed on a processor arrangement of an ultrasound image processing apparatus, cause the processor arrangement to implement the method of any of the herein described embodiments.”) wherein the memory contains instructions configuring the at least a processor to: ([0038] “According to another aspect, there is provided a computer program product comprising a computer readable storage medium having computer readable program instructions embodied therewith for, when executed on a processor arrangement of an ultrasound image processing apparatus, cause the processor arrangement to implement the method of any of the herein described embodiments”) receive a set of ultrasonic images of an organ of a subject; ([0072] “In operation 110, a set of 3-D ultrasound images 
including the anatomical body of interest is provided. Such a set may comprise 3-D ultrasound images of different individuals such as to obtain a set of 3-D ultrasound images including different ‘embodiments’ of the anatomical body of interest”, where “the set of 2-D ultrasound image planes is generated from the 3-D ultrasound images provided in operation 110. A further advantage is that in this manner different sets of 2-D ultrasound image planes can be readily generated from a single 3-D ultrasound image, e.g. image slices relating to a 4-chamber view and a 2-chamber view of a patient's heart.” [0075]) generate a set of shape parameters representing the organ’s shape as a function of the set of ultrasonic images ([0074] “Typically, for each 3-D ultrasound image provided in operation 110, a plurality of 2-D ultrasound image planes is provided such that measurements based on the contour or cross-section of the anatomical body in these 2-D ultrasound image planes are related to a ground truth value of an anatomical measurement of the 3-D anatomical body of interest.”) generate a 3D model of the organ based on the set of shape parameters; ([0062] “The image processor 42 for example may be adapted to map a heart model to a cardiac ultrasound image, e.g. 
a 2-D image or preferably a 3-D volumetric ultrasound image (or a user-selected slice thereof)”, where “a dataset has been compiled comprising the following elements: the ground truth values of the anatomical body measurement of interest extracted from each of these images, the respective sets of 2-D ultrasound image planes relating to the 3-D ultrasound images, the contour measurements performed on the 2-D ultrasound image planes such as an outline contour measurement and a cross-sectional measurement” [0078]) generate a map by determining a level of uncertainty at each location of a plurality of locations on the 3D model; ([0080] “In a preferred embodiment, the machine learning algorithm trains a further function g (S.sub.n,1, C.sub.n,1, A.sub.n,1, S.sub.n,2, C.sub.n,2, A.sub.n,2, . . . ) in order to estimate an uncertainty in the estimated value of the anatomical body measurement in order to allow a user to assess the reliability of the estimated anatomical body measurement… where the respective sets of 2-D ultrasound image planes contain subsets of 2-D ultrasound image planes sharing the same viewing angle of the anatomical body of interest”) and overlay the map onto the 3D model. ([0064] “The graphics processor 50 can also generate graphic overlays for display with the ultrasound images, such as the overlay of the heart model 1 over a cardiac ultrasound image to which the heart model 1 is mapped.”). While Weber teaches slices of 3D ultrasound images ([0062] “3D volumetric ultrasound image”), Weber does not explicitly teach constructing the 3D model from those slices. However, this is known in the art as taught by Amyot. Amyot is analogous to the claimed invention, as both relate to generating a 3D model of an organ.
Amyot further teaches that these 3D slices can be combined to generate a 3D model ([0008] “the method comprising: retrieving from a memory a 3D volume model of the organ, the 3D volume model describing a 3D structure of the organ and a distribution of density within the 3D structure, the 3D structure representing a surface and internal features of the organ; generating a slice of the 3D model according to a position and an orientation of an imaging device, the slice including a cross-section of the surface and the internal features; rendering an image in accordance with the slice; and displaying the image.”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Amyot into Weber to teach that a 3D volume model of an organ is constructed from the slices of the 3D model. The combination of Weber and Amyot teaches a shape identification model trained on a training dataset (Weber; [0021] “training a machine-learning algorithm so that it will generate an estimate of the anatomical body measurement value from inputs comprising at least one of an outline contour measurement and a cross-sectional measurement of a contour of the anatomical body that are determined by a user from a 2-D ultrasound image”). However, Weber fails to teach this same shape identification model trained on a training dataset comprising historical ultrasonic images correlated with historical computed tomography scan data. This is known in the art as taught by Azizian. Azizian teaches a model trained on a training dataset comprising historical ultrasonic images correlated with historical computed tomography scan data ([col.
4, lines 25-30] “In at least one embodiment, a number of different 2D slices can be generated from a single 3D image, such as 3D CT scan data, and multiple regions selected from each 2D slice, which can be used to synthesize several different ultrasound images, which can each then be used as training data (or for other such purposes)”, where “In at least one embodiment, in an effort to preserve patient confidentiality (e.g., where patient data or records are to be used off-premises)” [col. 87, lines 54-56]. Note: the latter teaches that patient records are used to create the training data taught by Azizian; the CT data therefore would be historical CT data). Azizian is analogous to the claimed invention, as both relate to generating a dataset of ultrasound images of organs to train a neural network. Azizian further teaches that their invention addresses the issue of insufficient amount of accurate and valid training data for machine learning in diagnostic approaches ([col. 1, lines 17-22] “For data relating to radiological healthcare data, such as ultrasound image data, such accurately labeled training data can be difficult and expensive to generate or obtain. Without a sufficient amount of valid training data, results of diagnostic approaches that rely upon this training data can be limited in accuracy”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Azizian into Weber to generate a greater amount of accurate ultrasound images that can be used to train a neural network for healthcare diagnosis. Regarding claim 2, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, wherein the set of ultrasonic images of the organ comprises an image selected from a list consisting of a transesophageal echocardiogram image, a transthoracic echocardiogram image, and a point-of-care ultrasound image. (Weber; [0048] “FIG.
1 shows a schematic illustration of an ultrasound system 100, in particular a medical two-dimensional (2-D) or three-dimensional (3-D) ultrasound imaging system… The ultrasound system 100 comprises an ultrasound probe 14 having at least one transducer array having a multitude of transducer elements for transmitting and/or receiving ultrasound waves.”, where “the probe 14 may be a transesophageal echocardiography (TEE) probe or a transthoracic echocardiography (TTE) probe.” [0048], and “The ultrasound image processing apparatus 10 may comprise a processor arrangement 16 including an image reconstruction unit that controls the provision of a 2-D or 3-D image sequence via the ultrasound system 100.” [0050]). Regarding claim 3, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, wherein: the memory contains instructions configuring the at least a processor to identify the training dataset; (Weber; [0092] “computer readable program instructions embodied on a computer readable storage medium having, when executed on a processor arrangement 16, cause the processor arrangement to implement the method 100 and/or 200”) the memory contains instructions configuring the at least a processor to train the shape identification model on the training dataset; (Weber; [0079] “This dataset is provided as inputs to a machine learning algorithm in operation 150, which machine learning algorithm is trained by this dataset in order to train an estimator function”) and identifying the training dataset comprises correlating an instance of computed tomography scan data with a historical ultrasonic image as a function of a medical record and a language model (Azizian; [col. 
4, lines 25-30] “In at least one embodiment, a number of different 2D slices can be generated from a single 3D image, such as 3D CT scan data, and multiple regions selected from each 2D slice, which can be used to synthesize several different ultrasound images, which can each then be used as training data (or for other such purposes)”, where “In at least one embodiment, in an effort to preserve patient confidentiality (e.g., where patient data or records are to be used off-premises)”, “In at least one embodiment, such a neural network can take as input any map (or other two-or three-dimensional representation) of one or more properties that can help infer a type of output image, such as an ultrasound image or other medical (or non-medical) image.” [col. 4, lines 20-25], and “In at least one embodiment, one or more PPUs 2700 are configured to accelerate High Performance Computing (“HPC”), data center, and machine learning applications. In at least one embodiment, PPU 2700 is configured to accelerate deep learning systems and applications including... real-time language translation” [col. 60, lines 31-43]). Azizian is analogous to the claimed invention, as both relate to generating a dataset of ultrasound images of organs to train a neural network. Azizian further teaches that their invention addresses the issue of insufficient amount of accurate and valid training data for machine learning in diagnostic approaches ([col. 1, lines 17-22] “For data relating to radiological healthcare data, such as ultrasound image data, such accurately labeled training data can be difficult and expensive to generate or obtain. Without a sufficient amount of valid training data, results of diagnostic approaches that rely upon this training data can be limited in accuracy”). 
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Azizian into Weber to generate a greater amount of accurate ultrasound images that can be used to train a neural network for healthcare diagnosis. Regarding claim 4, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, wherein: the memory contains instructions configuring the at least a processor to identify the training dataset; (Weber; [0092] “computer readable program instructions embodied on a computer readable storage medium having, when executed on a processor arrangement 16, cause the processor arrangement to implement the method 100 and/or 200”) the memory contains instructions configuring the at least a processor to train the shape identification model on the training dataset; (Weber; [0079] “This dataset is provided as inputs to a machine learning algorithm in operation 150, which machine learning algorithm is trained by this dataset in order to train an estimator function”) and identifying the training dataset comprises generating a synthetic ultrasonic image as a function of an instance of computed tomography scan data (Azizian; [col. 4, lines 25-30] “In at least one embodiment, a number of different 2D slices can be generated from a single 3D image, such as 3D CT scan data, and multiple regions selected from each 2D slice, which can be used to synthesize several different ultrasound images, which can each then be used as training data (or for other such purposes)”). Azizian is analogous to the claimed invention, as both relate to generating a dataset of ultrasound images of organs to train a neural network. Azizian further teaches that their invention addresses the issue of insufficient amount of accurate and valid training data for machine learning in diagnostic approaches ([col.
1, lines 17-22] “For data relating to radiological healthcare data, such as ultrasound image data, such accurately labeled training data can be difficult and expensive to generate or obtain. Without a sufficient amount of valid training data, results of diagnostic approaches that rely upon this training data can be limited in accuracy”). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the teachings of Azizian into Weber to generate a greater amount of accurate ultrasound images that can be used to train a neural network for healthcare diagnosis. Regarding claim 6, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, wherein the set of shape parameters comprises a plurality of numerical descriptors representing at least a geometric characteristic of the organ (Weber; [0068] “the clinician may evaluate a set of 2-D images, i.e. one or more 2-D images, of the anatomical body of interest… in which cross-sections of the chambers of the heart (right atrium (RA), left atrium (LA), right ventricle (RV), left ventricle (LV)) delimited by their respective contours, as schematically depicted for the LV by the dashed contour, to obtain such measurements… to facilitate the estimation of a 3-D measurement value of the anatomical body of interest, e.g.
its volume, from such orthogonal views, as an informed assumption may be made about the overall shape and dimensions of the anatomical body of interest of the orthogonal cross-sections in such x-plane 2-D ultrasound images.”, where “The clinician typically estimates such a 3-D measurement value from 2-D measurements of the cross-sectional views of the anatomical body of interest, such as circumference (contour) length, cross-sectional area and/or largest cross-sectional measurement (diameter) using geometrical assumptions as previously explained” [0068]) Regarding claim 7, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, wherein each shape parameter within the set of shape parameters is associated with a corresponding parameter range (Weber; [0080] “the respective sets of 2-D ultrasound image planes contain subsets of 2-D ultrasound image planes sharing the same viewing angle of the anatomical body of interest (e.g. a 4-chamber view or a 2-chamber view of a human heart) for the respective 3-D ultrasound images, different subsets of such 2-D ultrasound image planes may lead to a range of differences between the estimated value and the ground truth value of the anatomical body measurement of interest, which range of differences may be determined and used to express an uncertainty in the estimation value of the anatomical body measurement of interest as obtained with the estimation function of the model as generated with the machine learning algorithm.", where “the machine learning algorithm trains a further function g (S.sub.n,1, C.sub.n,1, A.sub.n,1, S.sub.n,2, C.sub.n,2, A.sub.n,2, . . . ) in order to estimate an uncertainty in the estimated value of the anatomical body measurement” [0080]). 
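Claim 7 associates each shape parameter with a corresponding parameter range, and claim 10 requires the varied second model to be statistically constrained. In statistical shape modeling this is commonly implemented by limiting each mode weight to a range such as a few standard deviations of the training data. A minimal, hedged sketch of that kind of constraint (the function name, parameters, and ranges are hypothetical, not from the application or Weber):

```python
def clamp_to_range(params, ranges):
    """Constrain each shape parameter to its associated (low, high) range,
    as statistical shape models typically limit mode weights to a few
    standard deviations observed in the training data (illustrative only)."""
    return [min(max(p, lo), hi) for p, (lo, hi) in zip(params, ranges)]

# Each parameter paired with a plausible +/-3 SD range (made-up values).
params = [0.4, 5.2, -7.0]
ranges = [(-3.0, 3.0), (-3.0, 3.0), (-3.0, 3.0)]
constrained = clamp_to_range(params, ranges)  # [0.4, 3.0, -3.0]
```

Varying the parameters within such ranges yields plausible model variants, which is one common reading of a "statistically constrained" second 3D model.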
Regarding claim 8, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, wherein receiving the set of ultrasonic images comprises receiving the set of ultrasonic images from a patient profile (Weber; [0086] “In operation 220, the ultrasound image processing apparatus 10 is provided with one or more 2-D ultrasound images, which for example may have been generated using an ultrasound probe 14 for generating such 2-D ultrasound images or which may have been retrieved from the data storage arrangement 60 in which previously captured 2-D ultrasound images have been stored.”, where “Ultrasound images may be used by a clinician to derive information of diagnostic relevance from such images, such as measurements of dimensions of an anatomical body of interest within a patient” [0066]). Regarding claim 10, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, wherein generating the 3D model further comprises generating a second 3D model as a function of the 3D model, by varying the set of shape parameters, wherein the second 3D model is statistically constrained (Weber; [0025] “This furthermore may provide the user with an indication of whether the 2-D ultrasound image acquisition should be repeated along a different viewing angle, e.g. a different scanning direction, in order to reduce the uncertainty in the 3-D anatomical body measurement value obtained with the model” where “In an embodiment, determining a ground truth value of the anatomical body measurement comprises mapping a segmentation model for identifying the anatomical body to said anatomical body within the 3-D ultrasound image; and deriving the ground truth value of the anatomical body measurement from the mapped segmentation model… This may further involve user-operated adjustment of the mapping of the segmentation model onto the anatomical body within the 3-D ultrasound image to further improve this accuracy. The ground truth value then may be obtained e.g. 
from a 3-D mesh of the volume delimited by the segmentation model.” [0026]. Note: this is interpreted such that repeating the image acquisition will create a second 3D model). Regarding claim 11, claim 11 has substantially similar limitations to claim 1, but in a method form. The combination of Weber, Amyot, and Azizian teaches a method of generating a three-dimensional (3D) model with an overlay (Weber; [Abstract] “The application discloses a computer-implemented method (100) of providing a model for estimating an anatomical body measurement value from at least one 2-D ultrasound image including a contour of the anatomical body”). Claims 12, 13, 14, 16, 17, 18, and 20 have substantially similar limitations to claims 2, 3, 4, 6, 7, 8, and 10, respectively, and are therefore rejected under the same rationales as those claims. Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Weber (US 2020/0074664 A1), in view of Amyot (US 2012/0128218 A1) and Azizian (US 12211609 B1), and further in view of Urman et al. (US 20220133261 A1, hereinafter Urman).
Regarding claim 5, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, but fails to teach wherein the memory contains instructions configuring the at least a processor to determine a Left Atrial Appendage Occlusion Device placement as a function of the 3D model. However, this is known in the art, as taught by Urman.

Urman teaches wherein the memory contains instructions configuring the at least a processor to determine a Left Atrial Appendage Occlusion Device placement as a function of the 3D model ([0022] "Generating a 3D anatomical model (e.g., anatomical map) of the relevant region (e.g., a region encompassing the septum and the LAA)", [0023] "Defining a landing site for the LAA device… taking into account the type of occlusion device and the LAA shape. Such a landing site is defined, for example, by delineating an ostium of the LAA in a form of a closed curve over the 3D anatomical model.", where "In an embodiment, the medical device is an LAA occlusion device." [0007]. Note: LAA is the acronym for Left Atrial Appendage).

Urman is analogous to the claimed invention, as both relate to generating a 3D model of an organ using historical ultrasound data ([0002] "Using ultrasound imaging allows for anatomical modeling over time (e.g., throughout a heart cycle)"). Urman further teaches that "The personalized models and a model of one or more closure devices are used to select a closure device for the patient, appropriate for the entire heart cycle and to guide placement of the selected closure device during an implantation." [0002]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Urman into the combination of Weber, Amyot, and Azizian to generate a 3D model personalized for the patient and to guide the placement of the Left Atrial Appendage Occlusion Device during implantation.
Claim 15 has substantially similar limitations to claim 5 and is therefore rejected under the same rationale as claim 5.

Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Weber (US 2020/0074664 A1), in view of Amyot (US 2012/0128218 A1) and Azizian (US 12211609 B1), and further in view of Wipperman et al. (US 2022/0199245 A1, hereinafter Wipperman).

Regarding claim 9, the combination of Weber, Amyot, and Azizian teaches the apparatus of claim 1, but fails to teach wherein the map comprises a color-coded heat map configured to visualize one or more areas of uncertainty on the 3D model. However, this is known in the art, as taught by Wipperman.

Wipperman teaches wherein the map comprises a color-coded heat map configured to visualize one or more areas of uncertainty on the 3D model ([0195] "FIG. 34 shows Z scores in a heat map 3400 for standard deviation (SD) during morning collections. The heat map 3400 shows the Z score of standard deviation for features 3400A during tasks 3400B, based on legend 3400C, which range in values from −3 to +3 and assigned a color. FIG. 35 shows Z scores in a heat map 3500 for SD during evening collections. The heat map 3500 shows the Z score of standard deviation for features 3500A during tasks 3500B, based on legend 3500C, which range in values from −3 to +3 and assigned a color. The standard deviations indicate the amount of variability in the feature data such that high standard deviation may indicate less reliability whereas a low standard deviation may indicate greater reliability."). Wipperman is analogous to the claimed invention, as both relate to creating and mapping statistical data of a body part of a patient for diagnosis and treatment.
Wipperman further teaches that "there is a need for improved techniques for making assessment, determining diagnoses, and assigning treatments to patients with neuromuscular disorders" [0006], since "[i]dentifying statistical data for providing clinical outcomes (e.g., for clinical trials, for disease or disorder identification, for treatment planning, etc.) is difficult due to the type, volume, and/or depth of available data" [0003]. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Wipperman into the combination of Weber, Amyot, and Azizian in order to better identify statistical data through visualization of uncertainty using a heat map.

Claim 19 has substantially similar limitations to claim 9 and is therefore rejected under the same rationale as claim 9.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALICIA HA, whose telephone number is (571) 272-3601. The examiner can normally be reached Mon-Thurs 9:00 AM - 6:00 PM and Fri 9:00 AM - 1:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

/ALICIA HA/
Examiner, Art Unit 2611

Prosecution Timeline

- Aug 28, 2024: Application Filed
- Mar 16, 2026: Non-Final Rejection, §103 and §DP (current)


Prosecution Projections

- Expected OA Rounds: 1-2
- Grant Probability
- Median Time to Grant: 2y 9m
- PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
