Prosecution Insights
Last updated: April 19, 2026
Application No. 18/541,792

METHOD AND SYSTEM FOR SEQUENCE-AWARE ESTIMATION OF ULTRASOUND PROBE POSE IN LAPAROSCOPIC ULTRASOUND PROCEDURES

Non-Final OA: §103, §112
Filed: Dec 15, 2023
Examiner: MALDONADO, STEVEN
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Edda Technology Inc.
OA Round: 1 (Non-Final)
Grant Probability: 30% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 84%

Examiner Intelligence

Career Allow Rate: 30% (6 granted / 20 resolved; -40.0% vs TC avg)
Interview Lift: +54.2% higher allowance in resolved cases with an interview than without
Avg Prosecution: 3y 0m typical timeline (51 applications currently pending)
Total Applications: 71 (across all art units)
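The headline examiner metrics above are internally consistent and can be reproduced from the counts shown. A minimal sketch of the arithmetic; the per-case data behind the dashboard is not included in this report, so the Tech Center average and with-interview rate are back-solved from the printed deltas:

```python
# Reproduces the dashboard arithmetic from the summary figures shown above.
# tc_avg_estimate and with_interview are back-solved from the printed deltas,
# since the underlying per-case data is not part of this report.
granted, resolved = 6, 20
career_allow_rate = granted / resolved            # 0.30 -> "30% Career Allow Rate"
tc_avg_estimate = career_allow_rate + 0.400       # implied by "-40.0% vs TC avg"
with_interview = career_allow_rate + 0.542        # implied by "+54.2% Interview Lift"
print(f"allow rate {career_allow_rate:.0%}, TC avg ~{tc_avg_estimate:.0%}, "
      f"with interview ~{with_interview:.1%}")
```

The back-solved with-interview rate (~84.2%) matches the 84% "With Interview" headline, which is a useful sanity check that the lift is an absolute percentage-point difference rather than a relative one.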

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 49.1% (+9.1% vs TC avg)
§102: 15.9% (-24.1% vs TC avg)
§112: 25.8% (-14.2% vs TC avg)
TC averages are estimates • Based on career data from 20 resolved cases

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

Claims 1 and 11 recite the limitations "if a sequence of previous 3D probe poses exists" and "if the sequence of previous 3D probe poses does not exist." These may be construed as conditional limitations that are not given full patentable weight, because when the "if" condition is not met, the claim does not require the conditioned step to be positively performed, in light of the decisions below. In the recent Ex parte Gopalan decision, the PTAB addressed a claim where all of the features were recited in a conditional manner. A first step of "identifying … an outlier" was performed if "traffic is outside of a prediction interval." A second step of "identifying" was performed "only when a count of outliers … is greater than or equal to two, and exceeds an anomaly threshold." These were the only two elements of the independent claim. Thus, if the traffic is never outside Gopalan's prediction interval, then the steps of the method are never performed. However, the PTAB distinguished Schulhauser and noted that this construction "would render the entire claim meaningless." Gopalan at p. 5. The Board went on to state, "Although each of these steps is conditional, they are integrated into one method or path and do not cause the claim to diverge into two methods or paths, as in Schulhauser. Thus, we conclude that the broadest reasonable interpretation of claim 1 requires the performance of both steps…" Id. at p. 6.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "deployed at a three-dimensional (3D) probe pose during a medical procedure," which renders the claim unclear: it is unclear whether the limitation defines the probe as a 3-dimensional ultrasound probe. For the purposes of this examination it is interpreted as using an ultrasonic probe during a medical procedure.

Claim 1 also recites the limitation "if the sequence of previous 3D probe poses does not exist, obtaining an ASM representation based on at least one ASM pair obtained from the ultrasound image, and estimating the 3D probe pose based on the ASM representation of the ultrasound image via an ASM-pose mapping model that maps the ASM representation to a 3D probe pose," which renders the claim unclear. It is unclear what the ASM-pose mapping model is, how the model is generated, and whether building the model is a presurgical step.

Claim 1 also recites the limitation "obtaining an ASM/label representation of the virtual ultrasound image based on at least one ASM/label pair obtained from the virtual ultrasound image," which renders the claim unclear.
It is unclear how an ASM/label representation of the virtual ultrasound image can be obtained based on an ASM/label pair of that same virtual ultrasound image; the recitation is circular, with the representation derived from itself.

Claim 3 recites the limitation "a 3D anatomical structure in the 3D model." There is insufficient antecedent basis for this limitation in the claim.

Claim 4 recites the limitation "updating the ASM/label representation based on the 3D probe pose," which renders the claim unclear. It is unclear how the 3D probe pose would affect the ASM/label representation when the claims earlier state that the ASM/label representation predicts the 3D probe pose. For the purposes of this examination this is interpreted as the next ultrasound image in a sequence of an ultrasound sweep.

Claim 6 recites the limitation "updating the 3D probe pose to generate an updated 3D probe pose based on the updated ASM/label representation," which renders the claim unclear. Following the flow of the claims, the ASM/label representation helps predict the pose of the ultrasonic probe, which is then used to update the ASM/label representation, which is finally used to update the pose. Either a verification step confirming that the pose predicted from the ASM is correct is missing, or the claims need to clarify how the pose is confirmed to be correct.

Claim 11 recites the limitation "deployed at a three-dimensional (3D) probe pose during a medical procedure," which renders the claim unclear: it is unclear whether the limitation defines the probe as a 3-dimensional ultrasound probe. For the purposes of this examination it is interpreted as using an ultrasonic probe during a medical procedure.

Claim 11 also recites the limitation "if the sequence of previous 3D probe poses does not exist, obtaining an ASM representation based on at least one ASM pair obtained from the ultrasound image, and estimating the 3D probe pose based on the ASM representation of the ultrasound image via an ASM-pose mapping model that maps the ASM representation to a 3D probe pose," which renders the claim unclear. It is unclear what the ASM-pose mapping model is, how the model is generated, and whether building the model is a presurgical step.

Claim 11 also recites the limitation "obtaining an ASM/label representation of the virtual ultrasound image based on at least one ASM/label pair obtained from the virtual ultrasound image," which renders the claim unclear. It is unclear how an ASM/label representation of the virtual ultrasound image can be obtained based on an ASM/label pair of that same virtual ultrasound image; the recitation is circular, with the representation derived from itself.

Claim 13 recites the limitation "a 3D anatomical structure in the 3D model." There is insufficient antecedent basis for this limitation in the claim.

Claim 14 recites the limitation "updating the ASM/label representation based on the 3D probe pose," which renders the claim unclear. It is unclear how the 3D probe pose would affect the ASM/label representation when the claims earlier state that the ASM/label representation predicts the 3D probe pose. For the purposes of this examination this is interpreted as the next ultrasound image in a sequence of an ultrasound sweep.

Claim 16 recites the limitation "updating the 3D probe pose to generate an updated 3D probe pose based on the updated ASM/label representation," which renders the claim unclear. Following the flow of the claims, the ASM/label representation helps predict the pose of the ultrasonic probe, which is then used to update the ASM/label representation, which is finally used to update the pose. Either a verification step confirming that the pose predicted from the ASM is correct is missing, or the claims need to clarify how the pose is confirmed to be correct.

Claims 1 and 11 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential steps, such omission amounting to a gap between the steps. See MPEP § 2172.01. The omitted steps are: generating a 3-dimensional model of the target organ to further generate the simulated ultrasound images from different perspectives of the model (described in applicant's specification [0036-0037]).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ramalhinho et al (US20220249056A1; hereinafter referred to as Ramalhinho) in view of Yan et al (US20160367216A1; hereinafter referred to as Yan).
Regarding Claim 1, Ramalhinho discloses a method (“A computer implemented method is disclosed for identifying a pose of a probe by registering an ultrasound image from with volumetric scan data.” [Abstract]), comprising: receiving an ultrasound image acquired by an ultrasound probe deployed at a three-dimensional (3D) probe pose during a medical procedure involving a target organ (“The ultrasound probe 24 may be a laparoscopic ultrasound probe that is configured to obtain ultrasound data for generating an ultrasound image of an organ during a laparoscopic surgical procedure.” [0066]); detecting each anatomical structure from the ultrasound image and an anatomical structure mask (ASM) thereof (“The processor 26 may be configured to receive the ultrasound data from the probe 24, and determine an ultrasound image therefrom.” [0066], “At step 32, a feature vector is extracted from… the ultrasound image. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels.” [0056]); if a sequence of previous 3D probe poses exists, representing prior 3D probe poses of the ultrasound probe, predicting the 3D probe pose based on the sequence of previous 3D probe poses (“FIG. 2 illustrates a sequence of steps 40, according to an embodiment of the invention, for determining a sequence of probe poses corresponding with a sequence of ultrasound images obtained by sweeping the probe over tissue, such as an organ, by registering the sequence of ultrasound images with volumetric scan data. At step 41, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation). At step 42, a feature vector is extracted from each of the simulated ultrasound images, and from each of the sequence of ultrasound images.
The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels.” [0060-0062]); creating a virtual ultrasound image based on the predicted 3D probe pose, obtaining an ASM representation of the virtual ultrasound image based on at least one ASM pair obtained from the virtual ultrasound image (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065]); and if the sequence of previous 3D probe poses does not exist, obtaining an ASM representation based on at least one ASM pair obtained from the ultrasound image, and estimating the 3D probe pose based on the ASM representation of the ultrasound image via an ASM-pose mapping model that maps the ASM representation to a 3D probe pose (“FIG. 1 is a sequence of steps 30 according to an embodiment of the invention, for determining a probe pose corresponding with an ultrasound image by registering the ultrasound image with volumetric scan data. At step 31, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. 
at least one of position, orientation, depth/deformation). At step 32, a feature vector is extracted from each of the simulated ultrasound images, and from the ultrasound image. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels. At step 33, the feature vector from each simulated ultrasound image is compared with the feature vector from the ultrasound image to determine a distance or similarity value. At step 34, a candidate image is selected as the best match, based on the distance or similarity. At step 35, the pose of the probe is identified from the candidate image” [0054-0059]).

Ramalhinho does not specifically disclose estimating a label for each ASM to generate an ASM/label pair. However, in a similar field of endeavor, Yan teaches a system for automatic zone visualization employing an ultrasound probe and an ultrasound imaging workstation [Abstract]. Yan also teaches estimating a label for each ASM to generate an ASM/label pair (“A stage S54 of flowchart 50 encompasses workstation 30 displaying a zone visualization in real time. Specifically, once the zones are mapped to ultrasound image stream 33, procedurally-defined zone(s) can be visualized over an ultrasound image 33 z when being intersected. The intersected zone(s) are highlighted with a zone label displayed. In addition, different visualized zones may be differentiated with color coding, text labels, or audio feedback. For example, while a set of zones are being intersected by a ultrasound image 33 z, the intersection areas are shown in each corresponding color or label with or without audio feedback.” [0028]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ramalhinho as outlined above with estimating a label for each ASM to generate an ASM/label pair as taught by Yan, because it can facilitate an ultrasound-guided visualization of the anatomical structure [0006].

Regarding Claim 2, Ramalhinho discloses that the step of predicting comprises: generating a first trajectory of 3D coordinates of the prior 3D probe poses in the sequence (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065]); generating a second trajectory of 3D orientations of the prior 3D probe poses in the sequence; generating the predicted 3D probe pose based on the first and the second trajectories (“The mean number of plausible paths 2 for each of the nine sweep registrations vs the number of images is shown in FIG. 8. Since the Viterbi algorithm is recursive on the number of columns in the hidden Markov model (shown in FIG.
6), results are displayed as a function of the number of images used so far in the optimisation (from 2 to 20). FIG. 8 therefore shows the number of kinematically possible paths for N images (i.e. with a non-zero probability, based on the constraints defined above). The number of plausible trajectories found by the algorithm converges to 1 if enough images are used (N=17 in this case).” [0085]; multiple trajectories are predicted as potentially correct, and as more images are acquired the projected trajectory narrows to the closest matching trajectory), wherein the virtual ultrasound image is created based on a 3D model for the target organ in accordance with the predicted 3D probe pose (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065]).

Regarding Claim 3, Ramalhinho discloses that the step of obtaining the at least one ASM from a virtual ultrasound image comprises: identifying each 2D structure in the virtual ultrasound image corresponding to a 3D anatomical structure in the 3D model; generating a mask for the 2D structure to create a corresponding ASM (“FIG.
1 is a sequence of steps 30 according to an embodiment of the invention, for determining a probe pose corresponding with an ultrasound image by registering the ultrasound image with volumetric scan data. At step 31, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation). At step 32, a feature vector is extracted from each of the simulated ultrasound images, and from the ultrasound image. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels. At step 33, the feature vector from each simulated ultrasound image is compared with the feature vector from the ultrasound image to determine a distance or similarity value.” [0054-0057]);

Ramalhinho does not specifically disclose assigning a label for the 3D anatomical structure retrieved from the 3D model to the ASM to generate the ASM/label pair. However, in a similar field of endeavor, Yan teaches assigning a label for the 3D anatomical structure retrieved from the 3D model to the ASM to generate the ASM/label pair (“A stage S54 of flowchart 50 encompasses workstation 30 displaying a zone visualization in real time. Specifically, once the zones are mapped to ultrasound image stream 33, procedurally-defined zone(s) can be visualized over an ultrasound image 33 z when being intersected. The intersected zone(s) are highlighted with a zone label displayed. In addition, different visualized zones may be differentiated with color coding, text labels, or audio feedback.
For example, while a set of zones are being intersected by a ultrasound image 33 z, the intersection areas are shown in each corresponding color or label with or without audio feedback.” [0028]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ramalhinho as outlined above with assigning a label for the 3D anatomical structure retrieved from the 3D model to the ASM to generate the ASM/label pair as taught by Yan, because it can facilitate an ultrasound-guided visualization of the anatomical structure [0006].

Regarding Claim 4, as interpreted in view of the 35 U.S.C. 112(b) rejection above, Ramalhinho discloses further comprising updating the ASM/label representation based on the 3D probe pose to generate an updated ASM/label representation for the ultrasound image, wherein the updating comprises: creating a new virtual ultrasound image based on the 3D probe pose in accordance with the 3D model (“FIG. 2 illustrates a sequence of steps 40, according to an embodiment of the invention, for determining a sequence of probe poses corresponding with a sequence of ultrasound images obtained by sweeping the probe over tissue, such as an organ, by registering the sequence of ultrasound images with volumetric scan data. At step 41, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation). At step 42, a feature vector is extracted from each of the simulated ultrasound images, and from each of the sequence of ultrasound images. The feature vector may comprise a position and size of each vessel intersection with the respective image.
The feature vector may be obtained by segmentation of the images into vessels and not-vessels.” [0060-0062]); extracting one or more virtual ASM/label pairs from the new virtual ultrasound image; for each ASM/label pair generated based on the ultrasound image, identifying a corresponding virtual ASM/label pair, revising the ASM/label pair from the ultrasound image if it satisfies at least one predetermined criterion with respect to the virtual ASM/label pair; and generating an updated ASM/label representation based on the revised ASM/label pair (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065], “In the example described herein, it is implicit that the organ does not deform. In some embodiments, the set of simulated ultrasound images obtained from the volumetric scan may be parameterised to include deformation (e.g. in the y direction). In some embodiments the depth d parameter may represent deformation of the organ in a direction normal to the surface of the organ (rather than a simple translation without deformation). Higher accuracies may be achievable with parameterisation including deformation.” [0096]).
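The claim 4 limitation turns on an ASM/label pair satisfying "at least one predetermined criterion with respect to the virtual ASM/label pair" (claim 5 elaborates this as an overlap test). The claims do not fix a particular overlap measure, so the sketch below assumes intersection-over-union as one plausible criterion; the mask shapes, labels, and threshold are hypothetical:

```python
import numpy as np

# Hypothetical sketch of the claim 4/5 revision step: each detected
# anatomical-structure mask (ASM) from the live image is compared, by mask
# overlap (IoU here, as an assumed criterion), with the corresponding
# virtual ASM rendered at the estimated pose. Poorly overlapping pairs are
# removed; acceptable pairs inherit the label from the virtual pair.
def revise_pairs(live_pairs, virtual_pairs, iou_threshold=0.5):
    revised = []
    for (mask, label), (v_mask, v_label) in zip(live_pairs, virtual_pairs):
        inter = np.logical_and(mask, v_mask).sum()
        union = np.logical_or(mask, v_mask).sum()
        iou = inter / union if union else 0.0
        if iou >= iou_threshold:             # overlap acceptable: keep the
            revised.append((mask, v_label))  # mask, take the virtual label
    return revised                           # unacceptable pairs are removed

a = np.zeros((4, 4), bool); a[:2, :2] = True   # toy 4x4 masks
b = np.zeros((4, 4), bool); b[:2, :3] = True   # IoU = 4/6, above threshold
print(len(revise_pairs([(a, "unknown")], [(b, "portal vein")])))  # 1 pair kept
```

Dropping the pair mirrors the "not acceptable" branch of claim 5, while replacing the label mirrors the "acceptable" branch.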
Regarding Claim 5, Ramalhinho discloses that the step of revising according to at least one predetermined criterion comprises: if an overlap between the ASM in the ASM/label pair and the ASM in the virtual ASM/label pair is not acceptable, removing the ASM/label pair; if the overlap between the ASM in the ASM/label pair and the ASM in the virtual ASM/label pair is acceptable, replacing the label from the ASM/label pair with the label from the virtual ASM/label pair (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065], “The mean number of plausible paths 2 for each of the nine sweep registrations vs the number of images is shown in FIG. 8.” [0085], “In the example described herein, it is implicit that the organ does not deform. In some embodiments, the set of simulated ultrasound images obtained from the volumetric scan may be parameterised to include deformation (e.g. in the y direction). In some embodiments the depth d parameter may represent deformation of the organ in a direction normal to the surface of the organ (rather than a simple translation without deformation). Higher accuracies may be achievable with parameterisation including deformation.” [0096]; as shown in FIG. 8, the more ultrasound images are analyzed, the fewer possibilities remain for the potential number of paths (3D probe poses)).

Regarding Claim 6, in view of the 35 U.S.C. 112(b) rejection above, Ramalhinho discloses further comprising updating the 3D probe pose to generate an updated 3D probe pose based on the updated ASM/label representation, wherein the step of updating the 3D probe pose comprises: obtaining a new 3D probe pose based on the updated ASM/label representation via the ASM-pose mapping model; if the new 3D probe pose and the 3D probe pose satisfy a configured condition, outputting the new 3D probe pose as the updated 3D probe pose; and if the new 3D probe pose and the 3D probe pose do not satisfy the configured condition, outputting the 3D probe pose as the updated 3D probe pose (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065]).
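Across the claim 1, 3, and 9 mappings, the examiner repeatedly quotes Ramalhinho steps 31-35, which amount to nearest-neighbour retrieval over pose-annotated simulated images. A minimal sketch, assuming toy two-dimensional feature vectors in place of the vessel-intersection features (the vectors and pose labels below are illustrative, not from the reference):

```python
import numpy as np

# Hypothetical sketch of Ramalhinho steps 31-35: each simulated ultrasound
# image (rendered from the volumetric scan at a known probe pose) carries a
# feature vector; the live image's feature vector is matched to the nearest
# simulated one, and that candidate's known pose is returned (steps 34-35).
def estimate_pose(live_features, sim_features, sim_poses):
    # Step 33: distance between the live feature vector and each simulated one
    dists = np.linalg.norm(sim_features - live_features, axis=1)
    # Steps 34-35: best-matching candidate image -> its known probe pose
    return sim_poses[int(np.argmin(dists))]

sim_features = np.array([[0.0, 1.0], [2.0, 2.0], [5.0, 0.0]])
sim_poses = ["pose_A", "pose_B", "pose_C"]   # placeholder pose labels
print(estimate_pose(np.array([1.9, 2.2]), sim_features, sim_poses))  # pose_B
```

This is the branch the examiner maps to the "no prior pose sequence" limitation: a single-image lookup with no temporal constraint.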
Regarding Claim 7, Ramalhinho discloses that the configured condition defines that a deviation between the new 3D probe pose and the 3D probe pose is within an acceptable range (“At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0065], “imposing a transition probability penalty when a probe path direction deviates from an initial direction by more than a threshold amount.” [0024]).

Regarding Claim 8, Ramalhinho discloses further comprising adding the updated 3D probe pose to the sequence of previous 3D probe poses of the ultrasound probe (“At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0065]; as shown in FIG. 8, the more ultrasound images are analyzed, the fewer possibilities remain for the potential number of paths (3D probe poses), and the sequence is optimized for the highest-probability sequence).

Regarding Claim 9, Ramalhinho discloses that the ASM-pose mapping model is created prior to the medical procedure by: retrieving a 3D model of the target organ; generating a plurality of virtual 3D probe poses in connection with an ultrasound probe; creating a virtual ultrasound image with respect to each of the plurality of virtual 3D probe poses; obtaining an ASM-pose pairing for each of the plurality of virtual 3D probe poses, where the ASM-pose pairing includes a virtual 3D probe pose and an ASM/label representation for the virtual ultrasound image created with respect to the virtual 3D probe pose; and establishing the ASM-pose mapping model for mapping an ASM/label representation to a 3D probe pose based on the ASM-pose pairings (“FIG. 1 is a sequence of steps 30 according to an embodiment of the invention, for determining a probe pose corresponding with an ultrasound image by registering the ultrasound image with volumetric scan data. At step 31, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation). At step 32, a feature vector is extracted from each of the simulated ultrasound images, and from the ultrasound image. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels. At step 33, the feature vector from each simulated ultrasound image is compared with the feature vector from the ultrasound image to determine a distance or similarity value. At step 34, a candidate image is selected as the best match, based on the distance or similarity. At step 35, the pose of the probe is identified from the candidate image” [0054-0059]).
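Step 45, quoted for claims 6-8, selects the candidate-pose sequence by combining per-image similarity with kinematic transition probabilities in a hidden Markov model. A minimal Viterbi sketch; the emission and transition probabilities below are illustrative toy values, not taken from the reference:

```python
import numpy as np

# Hypothetical sketch of the hidden-Markov path selection the examiner cites
# (Ramalhinho steps 43-45): each image has several candidate simulated
# matches; a Viterbi pass picks the candidate sequence with the highest
# combined emission (similarity) and kinematic transition log-probability.
def viterbi_path(emission_logp, transition_logp):
    # emission_logp: (n_images, n_candidates); transition_logp: (n_cand, n_cand)
    n_images, n_cand = emission_logp.shape
    score = emission_logp[0].copy()
    back = np.zeros((n_images, n_cand), dtype=int)
    for t in range(1, n_images):
        # total[i, j]: best score ending in candidate j via previous candidate i
        total = score[:, None] + transition_logp + emission_logp[t][None, :]
        back[t] = np.argmax(total, axis=0)
        score = np.max(total, axis=0)
    path = [int(np.argmax(score))]           # backtrack from the best endpoint
    for t in range(n_images - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 3 images, 2 candidate poses each; transitions penalise jumps.
em = np.log(np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]]))
tr = np.log(np.array([[0.7, 0.3], [0.3, 0.7]]))
print(viterbi_path(em, tr))  # [0, 1, 1]
```

The transition matrix is where a deviation penalty of the kind quoted for claim 7 (paragraph [0024]) would enter: paths that change direction beyond a threshold would receive a lower transition probability.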
Regarding Claim 10, Ramalhinho discloses the step of establishing comprises: generating training data based on the ASM-pose pairings; obtaining, via machine learning based on the training data, the ASM-pose mapping model to learn relationships between ASM/label representations and 3D probe poses (“The feature vector may be extracted using a convolutional neural network. The convolutional neural network may have been trained to distinguish between ultrasound images.” [0029-0030]). Regarding Claim 11, Ramalhinho discloses a machine-readable and non-transitory medium having information recorded thereon, wherein the information, when read by the machine, (“A computer implemented method is disclosed for identifying a pose of a probe by registering an ultrasound image from with volumetric scan data.” [Abstract], “there is provided a non-transient machine readable medium comprising instructions for configuring a processor to perform the method of the first aspect, including any of the optional features thereof.” [0035]) causes the machine to perform the following steps: receiving an ultrasound image acquired by an ultrasound probe deployed at a three-dimensional (3D) probe pose during a medical procedure involving a target organ (“The ultrasound probe 24 may be a laparoscopic ultrasound probe that is configured to obtain ultrasound data for generating an ultrasound image of an organ during a laparoscopic surgical procedure.” [0066]); detecting each anatomical structure from the ultrasound image and an anatomical structure mask (ASM) thereof (“The processor 26 may be configured to receive the ultrasound data from the probe 24, and determine an ultrasound image therefrom.” [0066], “At step 32, a feature vector is extracted from… the ultrasound image. The feature vector may comprise a position and size of each vessel intersection with the respective image.
The feature vector may be obtained by segmentation of the images into vessels and not-vessels.” [0056]); if a sequence of previous 3D probe poses exists, representing prior 3D probe poses of the ultrasound probe, predicting the 3D probe pose based on the sequence of previous 3D probe poses (“FIG. 2 illustrates a sequence of steps 40, according to an embodiment of the invention, for determining a sequence of probe poses corresponding with a sequence of ultrasound images obtained by sweeping the probe over tissue, such as an organ, by registering the sequence of ultrasound images with volumetric scan data. At step 41, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation). At step 42, a feature vector is extracted from each of the simulated ultrasound images, and from each of the sequence of ultrasound images. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels.” [0060-0062]); creating a virtual ultrasound image based on the predicted 3D probe pose, obtaining an ASM representation of the virtual ultrasound image based on at least one ASM pair obtained from the virtual ultrasound image (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. 
The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065]); and if the sequence of previous 3D probe poses does not exist, obtaining an ASM representation based on at least one ASM pair obtained from the ultrasound image, and estimating the 3D probe pose based on the ASM representation of the ultrasound image via an ASM-pose mapping model that maps the ASM representation to a 3D probe pose (“FIG. 1 is a sequence of steps 30 according to an embodiment of the invention, for determining a probe pose corresponding with an ultrasound image by registering the ultrasound image with volumetric scan data. At step 31, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation). At step 32, a feature vector is extracted from each of the simulated ultrasound images, and from the ultrasound image. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels. At step 33, the feature vector from each simulated ultrasound image is compared with the feature vector from the ultrasound image to determine a distance or similarity value. At step 34, a candidate image is selected as the best match, based on the distance or similarity. At step 35, the pose of the probe is identified from the candidate image” [0054-0059]). Ramalhinho does not specifically disclose estimating a label for each ASM to generate an ASM/label pair. However, in a similar field of endeavor, Yan teaches a system for automatic zone visualization employing an ultrasound probe and an ultrasound imaging workstation [Abstract].
Yan also teaches estimating a label for each ASM to generate an ASM/label pair (“A stage S54 of flowchart 50 encompasses workstation 30 displaying a zone visualization in real time. Specifically, once the zones are mapped to ultrasound image stream 33, procedurally-defined zone(s) can be visualized over an ultrasound image 33 z when being intersected. The intersected zone(s) are highlighted with a zone label displayed. In addition, different visualized zones may be differentiated with color coding, text labels, or audio feedback. For example, while a set of zones are being intersected by a ultrasound image 33 z, the intersection areas are shown in each corresponding color or label with or without audio feedback.” [0028]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ramalhinho as outlined above with estimating a label for each ASM to generate an ASM/label pair as taught by Yan, because it can facilitate an ultrasound-guided visualization of the anatomical structure [0006]. Regarding Claim 12, Ramalhinho discloses the step of predicting comprises: generating a first trajectory of 3D coordinates of the prior 3D probe poses in the sequence (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time.
A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065]); generating a second trajectory of 3D orientations of the prior 3D probe poses in the sequence; generating the predicted 3D probe pose based on the first and the second trajectories (“The mean number of plausible paths 2 for each of the nine sweep registrations vs the number of images is shown in FIG. 8. Since the Viterbi algorithm is recursive on the number of columns in the hidden Markov model (shown in FIG. 6), results are displayed as a function of the number of images used so far in the optimisation (from 2 to 20). FIG. 8 therefore shows the number of kinematically possible paths for N images (i.e. with a non-zero probability, based on the constraints defined above). The number of plausible trajectories found by the algorithm converges to 1 if enough images are used (N=17 in this case).” [0085], multiple trajectories are predicted as being the potentially correct trajectory as more images are acquired the projected trajectory narrows to the closest matching trajectory), wherein the virtual ultrasound image is created based on a 3D model for the target organ in accordance with the predicted 3D probe pose (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. 
A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065]). Regarding Claim 13, Ramalhinho discloses the step of obtaining the at least one ASM from a virtual ultrasound image comprises: identifying each 2D structure in the virtual ultrasound image corresponding to a 3D anatomical structure in the 3D model; generating a mask for the 2D structure to create a corresponding ASM (“FIG. 1 is a sequence of steps 30 according to an embodiment of the invention, for determining a probe pose corresponding with an ultrasound image by registering the ultrasound image with volumetric scan data. At step 31, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation). At step 32, a feature vector is extracted from each of the simulated ultrasound images, and from the ultrasound image. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels. At step 33, the feature vector from each simulated ultrasound image is compared with the feature vector from the ultrasound image to determine a distance or similarity value.” [0054-0057]). Ramalhinho does not specifically disclose assigning a label for the 3D anatomical structure retrieved from the 3D model to the ASM to generate the ASM/label pair. However, in a similar field of endeavor, Yan teaches assigning a label for the 3D anatomical structure retrieved from the 3D model to the ASM to generate the ASM/label pair (“A stage S54 of flowchart 50 encompasses workstation 30 displaying a zone visualization in real time.
Specifically, once the zones are mapped to ultrasound image stream 33, procedurally-defined zone(s) can be visualized over an ultrasound image 33 z when being intersected. The intersected zone(s) are highlighted with a zone label displayed. In addition, different visualized zones may be differentiated with color coding, text labels, or audio feedback. For example, while a set of zones are being intersected by a ultrasound image 33 z, the intersection areas are shown in each corresponding color or label with or without audio feedback.” [0028]). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Ramalhinho as outlined above with assigning a label for the 3D anatomical structure retrieved from the 3D model to the ASM to generate the ASM/label pair as taught by Yan, because it can facilitate an ultrasound-guided visualization of the anatomical structure [0006]. Regarding Claim 14, as interpreted in view of the 35 U.S.C. § 112(b) rejection above, Ramalhinho discloses the information, when read by the machine, further causes the machine to perform the step of updating the ASM/label representation based on the 3D probe pose to generate an updated ASM/label representation for the ultrasound image, wherein the updating comprises: creating a new virtual ultrasound image based on the 3D probe pose in accordance with the 3D model (“FIG. 2 illustrates a sequence of steps 40, according to an embodiment of the invention, for determining a sequence of probe poses corresponding with a sequence of ultrasound images obtained by sweeping the probe over tissue, such as an organ, by registering the sequence of ultrasound images with volumetric scan data. At step 41, the volumetric scan data is processed to determine a plurality of simulated ultrasound images corresponding with different poses of the probe (e.g. at least one of position, orientation, depth/deformation).
At step 42, a feature vector is extracted from each of the simulated ultrasound images, and from each of the sequence of ultrasound images. The feature vector may comprise a position and size of each vessel intersection with the respective image. The feature vector may be obtained by segmentation of the images into vessels and not-vessels.” [0060-0062]); extracting one or more virtual ASM/label pairs from the new virtual ultrasound image; for each ASM/label pair generated based on the ultrasound image, identifying a corresponding virtual ASM/label pair, revising the ASM/label pair from the ultrasound image if it satisfies at least one predetermined criterion with respect to the virtual ASM/label pair; and generating an updated ASM/label representation based on the revised ASM/label pair (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065], “In the example described herein, it is implicit that the organ does not deform. In some embodiments, the set of simulated ultrasound images obtained from the volumetric scan may be parameterised to include deformation (e.g. in the y direction).
In some embodiments the depth d parameter may represent deformation of the organ in a direction normal to the surface of the organ (rather than a simple translation without deformation). Higher accuracies may be achievable with parameterisation including deformation.” [0096]). Regarding Claim 15, Ramalhinho discloses the step of revising according to at least one predetermined criterion comprises: if an overlap between the ASM in the ASM/label pair and the ASM in the virtual ASM/label pair is not acceptable, removing the ASM/label pair; if the overlap between the ASM in the ASM/label pair and the ASM in the virtual ASM/label pair is acceptable, replacing the label from the ASM/label pair with the label from the virtual ASM/label pair (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. At step 45, a probe path is identified by determining from the candidate images which is most likely to match each ultrasound image in the sequence of ultrasound images using a transition probability between two candidate images. The transition probability may be based on kinematic assumptions about the movement of the probe over time. A hidden Markov model may be used to determine the simulated images that are most likely to correspond with the sequence of ultrasound images.” [0063-0065], “The mean number of plausible paths 2 for each of the nine sweep registrations vs the number of images is shown in FIG. 8.” [0085], “In the example described herein, it is implicit that the organ does not deform. In some embodiments, the set of simulated ultrasound images obtained from the volumetric scan may be parameterised to include deformation (e.g. in the y direction).
In some embodiments the depth d parameter may represent deformation of the organ in a direction normal to the surface of the organ (rather than a simple translation without deformation). Higher accuracies may be achievable with parameterisation including deformation.” [0096], as shown in Fig. 8, the more ultrasound images analyzed, the fewer the possibilities for the potential number of paths (3D probe poses)). [Image: Ramalhinho, Fig. 8] Regarding Claim 16, as interpreted in view of the 35 U.S.C. § 112(b) rejection above, Ramalhinho discloses the information, when read by the machine, further causes the machine to perform the step of updating the 3D probe pose to generate an updated 3D probe pose based on the updated ASM/label representation, wherein the step of updating the 3D probe pose comprises: obtaining a new 3D probe pose based on the updated ASM/label representation via the ASM-pose mapping model; if the new 3D probe pose and the 3D probe pose satisfy a configured condition, outputting the new 3D probe pose as the updated 3D probe pose; and if the new 3D probe pose and the 3D probe pose do not satisfy the configured condition, outputting the 3D probe pose as the updated 3D probe pose (“At step 43, the feature vector from each simulated ultrasound image is compared with the feature vector from each of the sequence of ultrasound images to determine a distance or similarity value. At step 44, candidate simulated images are selected that best match each of the sequence of ultrasound images, based on the distance or similarity. …” [0063-0065])
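The accept-or-keep update rule recited in Claim 16 (output the new pose only when it satisfies a configured condition against the current pose, otherwise keep the current pose) can be sketched as a simple threshold test. The Euclidean criterion and the 0.05 threshold below are assumptions for illustration; the claim leaves the "configured condition" unspecified:

```python
import numpy as np

def update_pose(current_pose, new_pose, max_position_jump=0.05):
    """Accept the re-estimated pose only if it stays within an assumed
    Euclidean deviation of the current pose; otherwise keep the current
    pose. Poses here are plain 3-element position lists for simplicity.
    """
    deviation = np.linalg.norm(np.asarray(new_pose) - np.asarray(current_pose))
    return new_pose if deviation <= max_position_jump else current_pose
```

This is also one plausible reading of Claim 7's "deviation ... at an acceptable range" condition: small corrections are accepted, while a jump implausible for a swept probe is rejected in favour of the existing estimate.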

Prosecution Timeline

Dec 15, 2023
Application Filed
Dec 10, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12551289
Tracker-Based Surgical Navigation
2y 5m to grant Granted Feb 17, 2026
Patent 12496034
SYSTEMS AND METHODS FOR PATIENT MONITORING
2y 5m to grant Granted Dec 16, 2025
Patent 12484796
SYSTEM AND METHOD FOR MEASURING PULSE WAVE VELOCITY
2y 5m to grant Granted Dec 02, 2025
Patent 12350095
DIAGNOSTIC IMAGING CATHETER AND DIAGNOSTIC IMAGING APPARATUS
2y 5m to grant Granted Jul 08, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
30%
Grant Probability
84%
With Interview (+54.2%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 20 resolved cases by this examiner. Grant probability derived from career allow rate.
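As a sanity check on the figures above, the arithmetic appears to be: career allow rate = 6 granted / 20 resolved = 30%, and the "+54.2%" interview lift reads as the percentage-point gap between the with-interview outcome (~84%) and that baseline. A hypothetical sketch, since the site's exact model is not stated:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a fraction (6 granted / 20 resolved -> 0.30)."""
    return granted / resolved

def interview_lift_points(with_interview_rate: float, base_rate: float) -> float:
    """Interview lift in percentage points, e.g. 0.842 - 0.30 -> 54.2."""
    return (with_interview_rate - base_rate) * 100.0

base = allow_rate(6, 20)                    # 0.30 -> the "30% Grant Probability"
lift = interview_lift_points(0.842, base)   # ~54.2 -> the "+54.2% Interview Lift"
```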
