Prosecution Insights
Last updated: April 19, 2026
Application No. 18/115,660

METHOD AND SYSTEM FOR REGISTERING IMAGES ACQUIRED WITH DIFFERENT MODALITIES FOR GENERATING FUSION IMAGES FROM REGISTERED IMAGES ACQUIRED WITH DIFFERENT MODALITIES

Non-Final OA: §103, §112
Filed
Feb 28, 2023
Examiner
HUNTSINGER, PETER K
Art Unit
2682
Tech Center
2600 — Communications
Assignee
Esaote S.p.A.
OA Round
3 (Non-Final)
28%
Grant Probability
At Risk
3-4
OA Rounds
4y 11m
To Grant
45%
With Interview

Examiner Intelligence

Grants only 28% of cases
28%
Career Allow Rate
90 granted / 322 resolved
-34.0% vs TC avg
Strong +17% interview lift
+16.7%
Interview Lift
resolved cases with interview
Typical timeline
4y 11m
Avg Prosecution
59 currently pending
Career history
381
Total Applications
across all art units

Statute-Specific Performance

§101
9.3%
-30.7% vs TC avg
§103
50.3%
+10.3% vs TC avg
§102
19.4%
-20.6% vs TC avg
§112
19.0%
-21.0% vs TC avg
Black line = Tech Center average estimate • Based on career data from 322 resolved cases

Office Action

§103 §112
DETAILED ACTION

Claims 1-12 are currently pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/5/26 has been entered.

Response to Arguments

Applicant's arguments filed 12/3/25 have been fully considered but they are not persuasive.

The Applicant argues on page 7 of the response in essence that: Guracar fails to disclose or suggest a tracker-less method as recited in claim 1. Guracar does in fact use an additional sensor for tracking the ultrasound probe as described in Guracar paras. [0031]-[0034], and this sensor and/or related position sensing is used in act 38 as described in Guracar para. [0042], is used during the scanning in act 42 described in Guracar paras. [0046]-[0047], is compensated for in the spatial transform and the coordinate transformation matrix as described in Guracar para. [0050], and is used in the spatial transform as described in Guracar para. [0051]. The Office Action attempts to rely on Guracar para. [0053] to allegedly teach two of the steps of the method recited in claim 1; however, the act 46 described in Guracar para. [0053] employs ultrasound data from the scan of act 40 and the scan data obtained in act 32. As described in Guracar para. [0046], the act 42 detects position of the transducer used in act 40 described in Guracar para. [0043]. Accordingly, Guracar para. [0053] employs a position sensor and its associated position sensing data and therefore Guracar cannot be relied on to teach or suggest the tracker-less method recited in claim 1. 
Guracar uses a position sensor to register the wide field-of-view ultrasound image with the narrow field-of-view image (paragraph 34). Guracar states that where the transducer position sensor and the imaging system for the non-ultrasound modality are not registered, the portion represented by data of both modes is not known; the relative translation, rotation, and/or scale are not known (paragraph 35). Therefore, the position sensor is not used in registering image data obtained from the high-depth and large FOV ultrasound scan of the anatomical region with image data of the anatomical region acquired with a different modality. Guracar teaches that landmark detection is used to register the ultrasound image and the image of the different modality (paragraph 37), because the positioning between images is unknown.

The Applicant argues on page 8 of the response in essence that: The Office Action alleges that Weber, para. [0091] teaches "Acquiring a sequence of ultrasound images of an anatomical region from only a single probe" as recited in claim 1, but Weber, para. [0091] merely teaches a probe 230 that is configured to acquire ultrasound imaging data 211 at a first resolution and second ultrasound imaging data 212 at a second, lower resolution. Such data 211 and 212 are acquired subsequently, and not together in an interleaved mode. Weber does not disclose any interlacing mode.

Weber discloses that the first ultrasound imaging data 211 is acquired by transmitting the first plurality of transmit pulses 315 to the region of interest 310 and the second ultrasound imaging data 212 is acquired by transmitting the second plurality of transmit pulses 325 to the extended field-of-view 320 (paragraphs 119-120). The pulse sequence is interlaced because it includes pulses 315 and 325.

The Applicant argues on page 9 of the response in essence that: Guracar specifically seeks to resolve differences in position sensing between two transducers as stated in the Abstract and at least in paras. 
[0003] and [0013] of Guracar. The proposed modification of Guracar to only use one probe as stated in the Office Action to allegedly teach claims 1 and 7 would improperly render Guracar unsatisfactory for its intended purpose; therefore, there is no suggestion or motivation to make the proposed modification.

The intended purpose of Guracar is to generate a multi-modality image (paragraph 1). Obtaining the FOV and zoomed ultrasound images with one probe as taught by Weber would merely simplify the step of obtaining ultrasound images and would not prevent the proposed combination from producing a multi-modality image.

The Applicant argues on page 9 of the response in essence that: Further, Guracar cannot be combined with Weber as proposed because that would bring interlacing of three types of images: that is, wide, high-depth ultrasound scans with the other modality scans according to Guracar para. [0016] ("acts 32 and 34 are performed in an interleaved manner"); and wide, high-depth ultrasound scans (low resolution) with zoomed ultrasound scans (high resolution) according to what the Examiner states in the last paragraph of the Advisory Action regarding Weber (i.e., "Weber discloses the first ultrasound imaging data 211 is acquired by transmitting the first plurality of transmit pulses 315 to the region of interest 310 and the second ultrasound imaging data 212 is acquired by transmitting the second plurality of transmit pulses 325 to the extended field-of-view 320 (paragraph 119-120). The pulse sequence is interlaced because it includes pulses 315 and 325."). The result of such a proposed combination would be interlacing three images, which makes no sense as the frame rate would be unacceptable.

Guracar discloses that the acts are performed in the order shown or other orders. For example, acts 32 and 34 are performed in an interleaved manner, sequentially in either order, or at a same time (paragraph 16). 
Performing steps 32 and 34 sequentially in either order or at a same time would not result in interlacing three images.

The Applicant argues on pages 9 and 10 of the response in essence that: Further, Applicant disagrees with the Examiner's statement about Weber in the last paragraph of the Advisory Action that "[t]he pulse sequence is interlaced because it includes pulses 315 and 325." The fact that the pulses may be contiguous does not necessarily mean that they are fired in an interlaced manner such that the corresponding images are collected in an interleaved manner. Also, the first ultrasound imaging data 211 acquired at a first resolution and the second ultrasound imaging data 212 acquired at a second, lower resolution in Weber are used for the completely different aim of segmenting the anatomic structure (see Weber, Abstract), and such activity is commonly performed off-line. As explained in para. [0016] of the application, image interleaving is used for doubling the frame rate, which is a quantity that makes sense only in real-time processing. Thus, Weber does not teach interlacing as claimed.

Claim 1 recites “Acquiring a sequence of ultrasound images by an anatomical region from only a single probe by interlacing wide, high depth and large field of view (FOV) ultrasound scan to a zoomed ultrasound scan”. Weber likewise discloses using interlacing to obtain a combined image of a region of interest and extended field of view (paragraph 136). While Applicant argues that interleaving in the claimed invention is used for doubling the frame rate, the Applicant’s specification states that the high-depth and large FOV image data are not processed for display on the display screen of the ultrasound system or of the image processing device or on a remote screen, but are transmitted to the registration data calculator 210 (paragraph 84). Therefore, interlacing performed by the claimed invention does not result in a doubled frame rate. 
In response to Applicant's argument that the ultrasound images in Weber are used for the completely different aim of segmenting the anatomic structure, the fact that the inventor has recognized another advantage which would flow naturally from following the suggestion of the prior art cannot be the basis for patentability when the differences would otherwise be obvious. See Ex parte Obiaya, 227 USPQ 58, 60 (Bd. Pat. App. & Inter. 1985).

The Applicant argues on page 10 of the response in essence that: In Guracar, landmarks are used to calculate the spatial transform to be used for registering the wide, high-depth ultrasound scans obtained with the first probe with the other modality scans. For doing so, a tracking sensor is used to determine position and orientation of the first probe (see Guracar para. [0031]) and the second probe (see Guracar para. [0047]). The position and orientation of the second probe is used to adjust the spatial transform as calculated for the first probe. The spatial transform (e.g., "registration data" per the present application) in Guracar therefore requires the presence of at least a position sensor. Thus, even if Guracar could be combined with Weber (a point that Applicant does not concede), the result of such a proposed modification would be a single probe with a position sensor to be used for calculating the spatial transform because the position sensor is necessary for the functioning of the system disclosed in Guracar. Such a proposed combination would not provide a tracker-less registration as recited in the independent claims 1 and 7.

As explained above, the position sensor is used to register the wide field-of-view ultrasound image with the narrow field-of-view image, and is not used in registering image data obtained from the high-depth and large FOV ultrasound scan of the anatomical region with image data of the anatomical region acquired with a different modality. 
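The tracker-less, landmark-based registration both sides debate above can be sketched concretely. As a purely illustrative example (not code from Guracar, Weber, or the application), once matching landmarks are located in the ultrasound frame and the other-modality frame, a rigid spatial transform can be estimated from the landmark coordinates alone, with no probe tracker or position/orientation sensor, e.g. via the standard Kabsch/Procrustes method:

```python
import numpy as np

def estimate_rigid_transform(us_pts, mr_pts):
    """Estimate rotation R and translation t mapping ultrasound landmark
    coordinates onto other-modality (e.g. MRI/CT) landmark coordinates
    via the Kabsch/Procrustes method -- no position sensor required."""
    us_c = us_pts - us_pts.mean(axis=0)     # center both landmark sets
    mr_c = mr_pts - mr_pts.mean(axis=0)
    H = us_c.T @ mr_c                       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0] * (us_pts.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = mr_pts.mean(axis=0) - R @ us_pts.mean(axis=0)
    return R, t

# Synthetic check: recover a known rotation/translation from 2-D landmarks.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, -2.0])
us = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mr = us @ R_true.T + t_true                 # simulated matched landmarks
R, t = estimate_rigid_transform(us, mr)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

The recovered (R, t) pair plays the role of the "registration data" / coordinate transformation matrix discussed in the Office Action: it is computed entirely from image content (the landmarks), which is the sense in which landmark registration can proceed when sensor-based positioning between the modalities is unknown.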
Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventors, at the time the application was filed, had possession of the claimed invention.

Claim 1 recites “Registering image data obtained from the high-depth and large FOV ultrasound scan of the anatomical region with image data of the anatomical region acquired with a different modality and determining tracker-less registration data without using any probe tracker or position/orientation sensor”. 
Applicant’s specification discusses a method of avoiding the use of probe trackers for registration in paragraphs 8 and 9. While the Applicant’s specification generally implies that a probe tracker is not used, the specification fails to state that a position/orientation sensor is not used. The mere absence of a positive recitation is not a basis for an exclusion (see MPEP 2173.05(i)). Claim 7 contains a similar limitation. Therefore, because there is no express, implicit, or inherent disclosure of determining registration without a position/orientation sensor, claims 1-12 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2 and 7-12 are rejected under 35 U.S.C. 103 as being unpatentable over Guracar US Publication 2016/0331351 and Weber US Publication 2024/0404066 (hereafter “Weber”). 
Referring to claim 1, Guracar discloses a tracker-less method for registering images acquired with different modalities for generating fusion images from registered images acquired with different modalities, the method comprising:

- Acquiring a sequence of ultrasound images of an anatomical region from a probe by obtaining wide, high-depth and large field of view (FOV) ultrasound scan to a zoomed ultrasound scan (paragraph 13, The multimodality coordinate registration transformation acquired with one ultrasound transducer (e.g., wide field-of-view) is used to provide registration information enabling multimodality fusion with a different ultrasound transducer (e.g., narrow field-of-view));

- Registering image data obtained from the high-depth and large FOV ultrasound scan of the anatomical region with image data of the anatomical region acquired with a different modality and determining tracker-less registration data without using any probe tracker or position/orientation sensor (paragraph 18, In act 30, information acquired with one ultrasound transducer is registered with information from a non-ultrasound imaging modality for multi-modality imaging);

- the image data obtained from the high-depth and large FOV ultrasound scan and/or the image data acquired with the different modality not being displayed to the user (paragraph 13, The multimodality coordinate registration transformation acquired with one ultrasound transducer (e.g., wide field-of-view) is used to provide registration information enabling multimodality fusion with a different ultrasound transducer (e.g., narrow field-of-view)) (paragraph 53, In act 46, a multi-modality image is generated. Any now known or later developed multi-modality imaging may be used. 
The information from two different modalities, one of which is ultrasound, is fused for a combined presentation to the user [the multi-modality image is displayed but not the images used to create the multi-modality image]);

- Registering the image data acquired by the zoomed ultrasound scan with the zoomed image data obtained with the different modality by applying the tracker-less registration data to the image data acquired by the zoomed ultrasound scan (paragraph 35, In act 38, the ultrasound data is registered with the non-ultrasound data. The data from both modalities represents a part of the patient);

- Combining and/or fusing the registered image data acquired by the zoomed ultrasound scan with the zoomed image data obtained with the different modality to generate combined or fused image data (paragraph 53, In act 46, a multi-modality image is generated. Any now known or later developed multi-modality imaging may be used. The information from two different modalities, one of which is ultrasound, is fused for a combined presentation to the user); and

- Displaying the combined or fused image data acquired by the zoomed ultrasound scan with the zoomed image data obtained with the different modality in lieu of the image data obtained from the high-depth and large FOV ultrasound scan and/or the image data acquired with the different modality that was not displayed to the user (paragraph 53, In act 46, a multi-modality image is generated. Any now known or later developed multi-modality imaging may be used. The information from two different modalities, one of which is ultrasound, is fused for a combined presentation to the user).

While Guracar discloses acquiring both wide and zoomed ultrasound scans, Guracar does not disclose expressly doing so with a single probe that interlaces the scans. 
Weber discloses:

- Acquiring a sequence of ultrasound images by an anatomical region from only a single probe (paragraph 91, The beamforming unit 210 is connected to an ultrasound transducer probe 230, and is configured to acquire, using the ultrasound transducer probe, first ultrasound imaging data 211 at a first resolution and second ultrasound imaging data 212 at a second, lower resolution) by interlacing wide, high depth and large field of view (FOV) ultrasound scan to a zoomed ultrasound scan (paragraphs 119-120, According to the pulse sequence 300 in FIG. 3, the first ultrasound imaging data 211 is acquired by transmitting the first plurality of transmit pulses 315 to the region of interest 310. The first plurality of transmit pulses comprise narrow beams with a high line density. The second ultrasound imaging data 212 is acquired by transmitting the second plurality of transmit pulses 325 to the extended field-of-view 320. The second plurality of transmit pulses comprise wider transmit beams with a lower line density than the first plurality of transmit pulses 315, and are transmitted each side of the first plurality of transmit pulses to extend the field-of-view laterally).

At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use a single probe that interlaces wide and zoomed ultrasound images. The motivation for doing so would have been to eliminate the need for multiple probes while maintaining a high frame rate. Therefore, it would have been obvious to combine Weber with Guracar to obtain the invention as specified in claim 1. 
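The interleaving dispute running through this action turns on whether wide-FOV and zoomed frames are acquired alternately from one probe. As a purely illustrative sketch (not drawn from Guracar, Weber, or the application), an interlaced acquisition schedule simply alternates the two frame types within a single probe's pulse sequence:

```python
def interlaced_schedule(n_pairs):
    """Illustrative interlaced schedule for a single probe: alternate a
    wide, high-depth, large-FOV frame with a zoomed (region-of-interest)
    frame, yielding (frame_index, frame_type) pairs."""
    for i in range(n_pairs):
        yield (2 * i, "wide_fov")    # lower resolution, large field of view
        yield (2 * i + 1, "zoomed")  # higher resolution region of interest

schedule = list(interlaced_schedule(3))
# Frame types strictly alternate; each stream receives every other frame.
assert [kind for _, kind in schedule] == ["wide_fov", "zoomed"] * 3
```

Under such a schedule the two streams stay temporally aligned frame by frame, which is the property the parties dispute when arguing whether Weber's pulses 315 and 325 amount to "interlacing" in the claimed sense.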
Referring to claim 2, Guracar discloses wherein registration is carried out by means of registration algorithms comprising:

- defining landmarks on images comprising an image acquired by the different modality and an image acquired by the high-depth and large FoV ultrasound scan (paragraph 37, By locating landmark features as points, lines, areas, and/or volumes in both sets of data, the spatial registration between the different types of data is determined);

- defining a spatial reference system common to both of the said images (paragraph 37, The spatial transform to align the features in the two data spaces is calculated);

- determining transfer functions of the image pixels of the image according to the different modality to the image pixels of the image acquired by the high-depth ultrasound scan based on different spatial positions of the said landmarks in the spatial reference system, and in which the said transfer functions, also called registration data, are applied to the image pixels obtained by the zoomed ultrasound scan for registering the image with the different modality image, and which registered zoomed ultrasound image is combined with a correspondingly zoomed field of view of the image acquired by the different modality and only the combined image is displayed to the user (paragraph 40, A coordinate transformation matrix capturing the translation, orientation, and/or scale of the ultrasound data relative to the scan data of the other modality is determined).

Referring to claim 7, Guracar discloses a system configured for registering images acquired with different modalities for generating fusion images from registered images acquired with different modalities, which system comprises:

- an ultrasound imaging system (paragraph 15, The method is implemented by the system 10 of FIG. 
4);

- a registration data processor configured to store images acquired with a first imaging modality and images acquired by a probe of the ultrasound imaging system (paragraph 79, The processor 26 is configured to register scan data from one ultrasound transducer with scan data from another modality, such as magnetic resonance or computed tomography data);

- the registration data processor being configured to calculate tracker-less registration data of the image acquired by the ultrasound system with the image acquired with the first modality without using any probe tracker or position/orientation sensor (paragraph 79, The processor 26 is configured to register using a coordinate transformation matrix created using a transducer with a larger field of view);

- a zooming processor which sets the ultrasound imaging system for acquiring zoomed images by a probe (paragraph 43, In act 40, a different transducer with a same or different ultrasound imaging system is used to scan the patient. The scan of act 40 may have a shallower and/or narrower field of view than for act 34);

- an image combination processor which, once the registration data processor applies the registration data to the zoomed ultrasound image, combines the zoomed ultrasound image with a corresponding zoomed field of view of the image acquired by the first modality (paragraph 53, In act 46, a multi-modality image is generated. 
The information from two different modalities, one of which is ultrasound, is fused for a combined presentation to the user);

- a display for displaying the combined zoomed ultrasound image with the corresponding zoomed field of view of the image acquired by the first modality (paragraph 64, The display 28 is a monitor, LCD, projector, plasma display, CRT, printer, or other now known or later developed device for outputting visual information);

wherein the display is controlled to not display images acquired with the first imaging modality and/or images acquired by the probe of the ultrasound imaging system, and to instead display only the combined zoomed ultrasound image with the corresponding zoomed field of view of the image acquired by the first modality (paragraph 53, In act 46, a multi-modality image is generated. Any now known or later developed multi-modality imaging may be used. The information from two different modalities, one of which is ultrasound, is fused for a combined presentation to the user [the multi-modality image is displayed but not the images used to create the multi-modality image]).

While Guracar discloses acquiring both wide and zoomed ultrasound scans, Guracar does not disclose expressly doing so with a single probe.

Weber discloses:

- a registration data processor configured to store images acquired with a first imaging modality and images acquired by a probe of the ultrasound imaging system (paragraph 91, The beamforming unit 210 is connected to an ultrasound transducer probe 230, and is configured to acquire, using the ultrasound transducer probe, first ultrasound imaging data 211 at a first resolution and second ultrasound imaging data 212 at a second, lower resolution) (paragraph 119, According to the pulse sequence 300 in FIG. 3, the first ultrasound imaging data 211 is acquired by transmitting the first plurality of transmit pulses 315 to the region of interest 310. 
The first plurality of transmit pulses comprise narrow beams with a high line density);

- a zooming processor which sets the ultrasound imaging system for acquiring zoomed images by the same probe (paragraph 120, The second ultrasound imaging data 212 is acquired by transmitting the second plurality of transmit pulses 325 to the extended field-of-view 320. The second plurality of transmit pulses comprise wider transmit beams with a lower line density than the first plurality of transmit pulses 315, and are transmitted each side of the first plurality of transmit pulses to extend the field-of-view laterally).

At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use a single probe that obtains wide and zoomed ultrasound images. The motivation for doing so would have been to eliminate the need for multiple probes while maintaining a high frame rate. Therefore, it would have been obvious to combine Weber with Guracar to obtain the invention as specified in claim 7. 
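The "combining and/or fusing" step recited in claims 1 and 7 can also be sketched generically. A simple alpha blend of the registered, co-sized images is one conventional way to generate a fused display image; this is an illustrative assumption for exposition, not the specific fusion method of Guracar, Weber, or the application:

```python
import numpy as np

def fuse_images(us_img, other_img, alpha=0.5):
    """Alpha-blend a registered zoomed ultrasound image with the
    correspondingly zoomed view from the other modality (e.g. MRI/CT).
    Both inputs are assumed already co-registered on a common pixel grid."""
    if us_img.shape != other_img.shape:
        raise ValueError("images must be registered to a common grid")
    return alpha * us_img + (1.0 - alpha) * other_img

us = np.full((4, 4), 100.0)   # stand-in zoomed ultrasound frame
mri = np.full((4, 4), 200.0)  # stand-in zoomed MRI frame
fused = fuse_images(us, mri)  # equal-weight blend of the two frames
assert np.allclose(fused, 150.0)
```

Only `fused` would be sent to the display, which mirrors the claim structure the Examiner maps to Guracar's act 46: the component images feed the fusion but are not themselves shown to the user.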
Referring to claim 8, Guracar discloses wherein a controller of an ultrasound system is provided which is configured to drive the ultrasound system for carrying out a high-depth and large field of view imaging scan and a zoomed ultrasound scan, the controller feeding the image data acquired by said high-depth and large field of view imaging scans to the registration data processor for calculating the registration data with the image acquired by the first modality (paragraph 13, The multimodality coordinate registration transformation acquired with one ultrasound transducer (e.g., wide field-of-view) is used to provide registration information enabling multimodality fusion with a different ultrasound transducer (e.g., narrow field-of-view)) and the controller feeding the image data acquired by the zoomed ultrasound scan to the registration data processor for applying to it the registration data (paragraph 35, In act 38, the ultrasound data is registered with the non-ultrasound data. The data from both modalities represents a part of the patient); the controller providing the registered zoomed ultrasound image with the corresponding zoomed field of view of the image acquired by the first modality to the image combination processor and providing the combined image to the display (paragraph 53, In act 46, a multi-modality image is generated. Any now known or later developed multi-modality imaging may be used. The information from two different modalities, one of which is ultrasound, is fused for a combined presentation to the user).

While Guracar discloses acquiring both wide and zoomed ultrasound scans, Guracar does not disclose expressly interlacing the scans. Weber discloses wherein an ultrasound system control unit is provided which is configured to drive the ultrasound system for carrying out the scans in an interlaced manner (paragraphs 119-120, According to the pulse sequence 300 in FIG. 
3, the first ultrasound imaging data 211 is acquired by transmitting the first plurality of transmit pulses 315 to the region of interest 310. The first plurality of transmit pulses comprise narrow beams with a high line density. The second ultrasound imaging data 212 is acquired by transmitting the second plurality of transmit pulses 325 to the extended field-of-view 320. The second plurality of transmit pulses comprise wider transmit beams with a lower line density than the first plurality of transmit pulses 315, and are transmitted each side of the first plurality of transmit pulses to extend the field-of-view laterally).

At the time of the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to use a single probe that interlaces wide and zoomed ultrasound images. The motivation for doing so would have been to eliminate the need for multiple probes while maintaining a high frame rate. Therefore, it would have been obvious to combine Weber with Guracar to obtain the invention as specified in claim 8.

Referring to claim 9, Guracar discloses wherein the registration data processor as well as the image combination processor can be in the form of a software coding the instructions for a controller of an ultrasound system to carry out the above disclosed functions (paragraph 70, The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone, or in combination). 
Referring to claim 10, Guracar discloses wherein said software is loaded and executed by the controller which is integrated or part of a central processing unit (CPU) of the ultrasound system (paragraph 79, The processor 26 is configured to register scan data from one ultrasound transducer with scan data from another modality, such as magnetic resonance or computed tomography data).

Referring to claim 11, Guracar discloses wherein said software is loaded and executed by an external central processing unit (CPU) which is communicating with the controller of the ultrasound system and with the display of the ultrasound system (paragraph 66, The memory 12 is part of an imaging system (e.g., ultrasound system 14), part of a computer associated with the processor 26, part of a database, part of another system, or a standalone device).

Referring to claim 12, Guracar discloses wherein part of said software is loaded and executed by the controller, which is integrated, or is part of a central processing unit (CPU) of the ultrasound system and part of the software is loaded and executed by said external CPU (paragraph 70, The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone, or in combination. Likewise, processing strategies may include multiprocessing, multitasking, parallel processing, and the like).

Claims 3-6 are rejected under 35 U.S.C. 103 as being unpatentable over Guracar US Publication 2016/0331351 and Weber US Publication 2024/0404066 as applied to claim 1 above, and further in view of well known prior art.

Referring to claim 3, Guracar discloses determining the registration data, but does not disclose doing so expressly using a GAN. 
Official Notice is taken that it is well known and obvious in the art to register images using a generative algorithm such as a generative adversarial network (GAN) (See MPEP 2144.03). The motivation for doing so would have been to utilize a standardized technique that improves the registration of images while reducing the involvement of user interaction. Therefore, it would have been obvious to combine well known prior art with Guracar to obtain the invention as specified in claim 3.

Referring to claim 4, Guracar discloses determining the registration data, but does not disclose doing so expressly using one-shot machine learning registration of heterogeneous imaging modalities. Official Notice is taken that it is well known and obvious in the art to register images using one-shot machine learning registration of heterogeneous imaging modalities (See MPEP 2144.03). The motivation for doing so would have been to utilize a standardized technique that improves the registration of images while reducing the involvement of user interaction. Therefore, it would have been obvious to combine well known prior art with Guracar to obtain the invention as specified in claim 4.

Referring to claim 5, Guracar discloses mapping the image acquired by the different modality, such as for example MRI or CT, to a "synthetic" ultrasound image for subsequent registration to a real ultrasound image (paragraph 53, In act 46, a multi-modality image is generated. Any now known or later developed multi-modality imaging may be used. The information from two different modalities, one of which is ultrasound, is fused for a combined presentation to the user), but does not disclose expressly using a Machine Learning algorithm. Official Notice is taken that it is well known and obvious in the art to use a machine learning algorithm to map images (See MPEP 2144.03). 
The motivation for doing so would have been to utilize a standardized technique that improves the registration of images while reducing the involvement of user interaction. Therefore, it would have been obvious to combine well-known prior art with Guracar to obtain the invention as specified in claim 5.

Referring to claim 6, Guracar discloses wherein a Machine Learning algorithm is used for carrying out a segmentation in the ultrasound images (paragraph 38), but does not expressly disclose wherein a Machine Learning algorithm is used for carrying out an anatomical segmentation in the ultrasound images and the registration with previously segmented images acquired by the different modality. Official Notice is taken that it is well known and obvious in the art to use a Machine Learning algorithm for carrying out an anatomical segmentation (see MPEP 2144.03). The motivation for doing so would have been to utilize a standardized technique that improves the registration of images while reducing the involvement of user interaction. Therefore, it would have been obvious to combine well-known prior art with Guracar to obtain the invention as specified in claim 6.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PETER K HUNTSINGER, whose telephone number is (571) 272-7435. The examiner can normally be reached Monday through Friday, 8:30-5:00. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benny Q Tieu, can be reached at 571-272-7490.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/PETER K HUNTSINGER/
Primary Examiner, Art Unit 2682
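Background on the technology at issue: the claims and the Guracar reference both concern registering ultrasound images against another modality (MRI or CT). A standard similarity measure driving such multi-modality registration is mutual information. The sketch below is purely illustrative and is not drawn from the application, the Office Action, or the cited references; all names in it are the author's own.

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Mutual information between two same-shaped images via a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint probability estimate
    px = p.sum(axis=1, keepdims=True)       # marginal over a
    py = p.sum(axis=0, keepdims=True)       # marginal over b
    mask = p > 0                            # avoid log(0)
    return float((p[mask] * np.log(p[mask] / (px @ py)[mask])).sum())

# Toy check: an image is better aligned with itself than with a shifted copy.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, 5, axis=1)
print(mutual_information(img, img) > mutual_information(img, shifted))  # True
```

In practice a registration routine would search over candidate transforms (rigid, affine, or deformable) and keep the one maximizing this score; the tracker-less versus sensor-based distinction argued above concerns where the initial transform estimate comes from, not the similarity measure itself.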

Prosecution Timeline

Feb 28, 2023
Application Filed
Jun 09, 2025
Non-Final Rejection — §103, §112
Sep 11, 2025
Response Filed
Oct 01, 2025
Final Rejection — §103, §112
Dec 03, 2025
Response after Non-Final Action
Jan 05, 2026
Request for Continued Examination
Jan 22, 2026
Response after Non-Final Action
Feb 27, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12540884
Determining Fracture Roughness from a Core
2y 5m to grant Granted Feb 03, 2026
Patent 12412381
METHODS AND SYSTEMS FOR CONTROLLING OPERATION OF WIRELINE CABLE SPOOLING EQUIPMENT
2y 5m to grant Granted Sep 09, 2025
Patent 12387360
APPARATUS AND METHOD FOR ESTIMATING UNCERTAINTY OF IMAGE COORDINATE
2y 5m to grant Granted Aug 12, 2025
Patent 12388943
PRINTING SYSTEM USING FLUORESENT AND NON-FLUORESENT INK, PRINTING APPARATUS, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND CONTROL METHOD THEREOF
2y 5m to grant Granted Aug 12, 2025
Patent 12374081
DIGITAL IMAGE PROCESSING TECHNIQUES USING BOUNDING BOX PRECISION MODELS
2y 5m to grant Granted Jul 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
28%
Grant Probability
45%
With Interview (+16.7%)
4y 11m
Median Time to Grant
High
PTA Risk
Based on 322 resolved cases by this examiner. Grant probability derived from career allow rate.
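The "With Interview" figure appears to be the career allow rate plus the interview lift in percentage points; a minimal sketch of that arithmetic, under the assumption that the lift is additive (this is inferred from the displayed numbers, not from any documented formula):

```python
def interview_adjusted_probability(base_rate_pct: float, lift_pts: float) -> float:
    """Add the interview lift (in percentage points) to the base grant rate, capped at 100."""
    return min(base_rate_pct + lift_pts, 100.0)

base = 90 / 322 * 100                        # 90 granted of 322 resolved cases
adjusted = interview_adjusted_probability(base, 16.7)
print(round(base), round(adjusted))          # 28 45
```

This reproduces the dashboard's 28% base and 45% with-interview figures from the stated 90/322 record and +16.7% lift.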
