Prosecution Insights
Last updated: April 19, 2026
Application No. 18/067,691

Augmenting images with positional information

Status: Final Rejection (§103)
Filed: Dec 16, 2022
Examiner: PEDAPATI, CHANDHANA
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Auris Health, Inc.
OA Round: 4 (Final)
Grant Probability: 64% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 2y 10m
With Interview: 96%

Examiner Intelligence

Grants 64% of resolved cases.

Career Allow Rate: 64% (14 granted / 22 resolved; +1.6% vs TC avg)
Interview Lift: +32.5% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 10m average prosecution; 26 applications currently pending
Career History: 48 total applications across all art units
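The headline figures above are simple ratios of the examiner's career counts. As a quick sanity check of the displayed numbers (small discrepancies are rounding):

```python
granted, resolved = 14, 22
allow_rate = granted / resolved      # 0.6363... displayed as 64%
with_interview = 0.96                # displayed grant probability after an interview
lift = with_interview - allow_rate   # 0.3236... displayed as +32.5%
```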

Statute-Specific Performance

§101: 11.7% (-28.3% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 20.9% (-19.1% vs TC avg)
Deltas are measured against a Tech Center average estimate. Based on career data from 22 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicants

Limitations appearing inside of {} are intended to indicate the limitations not taught by said prior art(s)/combinations.

Response to Amendment

Applicant's amendment filed on 12/18/2025 has been entered. Amended claims include claims 1, 17, 33, and 49. Claim 51 is newly added. Claim 48 is newly canceled. Claims 2-8, 11-12, 27-28, and 34-45 were previously canceled. Claims 9-10, 13-16, 25-26, 29-32, 46-47, and 50 were previously presented. No new matter has been introduced. Claims 1, 9-10, 13-17, 25-26, 29-33, 46-47, and 49-51 are pending in the application.

Response to Arguments

Applicant's arguments, see Remarks, filed 08/19/2025, with respect to the rejections of claims 1, 9-10, 13-17, 25-26, 29-33, 46-47, and 49-51 over Fuimaono (US 20180360342 A1) in view of Ye (US 20210196398 A1) have been fully considered and are persuasive. Therefore, the rejection under 35 U.S.C. § 103 has been withdrawn. However, upon further consideration, a new ground of rejection is made over Fuimaono et al. (US 20180360342 A1) in view of Boddington et al. (US 20220265233 A1).

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9, 13, 17, 25, 29, 33, 49, and 51 are rejected under 35 U.S.C. 103 as being unpatentable over Fuimaono, US 20180360342 A1, in view of Boddington et al., US 20220265233 A1, hereinafter Boddington.

Regarding claim 1, a method for augmenting images with positional information, the method comprising: obtaining, from an imaging device, a two-dimensional (2D) image of an anatomy associated with a 2D coordinate frame (Fuimaono, Fig 6 and ¶[0053]; operation 102, the 2-D fluoroscopic image data is provided to the image processor, which may include assigning a coordinate system to the fluoroscopic image); identifying, {using a neural network}, a segment of the anatomy in the 2D image (Fuimaono, ¶[0073] and Fig 12A exhibit segmentation module 352; Fig 13A and ¶[0082]; 2-D segmentation); obtaining, from a location sensor associated with an instrument disposed within the anatomy, first location sensor data indicating one or more first poses of the instrument in a three-dimensional (3D) coordinate frame (Fuimaono, ¶[0067]; magnetic position tracking of catheter; ¶[0068]; catheter is moved along the wall of the anatomical structure to record location points to generate 3D anatomical geometry; ¶[0070]; position sub-system 309 provides at least 3-D mapping data); determining a transform between the 3D coordinate frame and the 2D coordinate frame {that maximizes an alignment between the one or more first poses of the instrument in the 3D coordinate frame with the segment of the anatomy in the 2D image}, the transform mapping a pose of the instrument in the 3D coordinate frame to the 2D coordinate frame (Fuimaono ¶[0053]; 3D-2D image converter 43 to convert the 3-D image data into 2-D space to be compatible with the 2D fluoroscopic image (FIG. 7A); where ¶[0073] and Fig 12A-12B; the 3-D mapping data from the position sensor is integrated into the 3D image data); obtaining, from the location sensor, second location sensor data indicating a second pose of the instrument in the 3D coordinate frame (Fuimaono, ¶[0084]; during the catheter mapping, increasingly more surface points are added to the mapping data in the course of time); and mapping the second pose of the instrument to the 2D image based on the transform so that the 2D image indicates the second pose of the instrument in relation to the segment of the anatomy (Fuimaono, ¶[0072]; the ACL technology is responsive to movement of the electrodes of the catheters and therefore updates the image of the electrodes in real time to provide a dynamic visualization of the catheters and their electrodes correctly positioned, sized, and oriented to the displayed map area).

Fuimaono teaches identifying a segment of the anatomy in the 2D image, but does not explicitly teach performing this identification using a neural network. Fuimaono also teaches determining a transform between the 3D coordinate frame and the 2D coordinate frame, but does not explicitly disclose a transform that maximizes an alignment between the one or more first poses of the instrument in the 3D coordinate frame with the segment of the anatomy in the 2D image. 
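As technical background for the claimed mapping step, a 3D-to-2D conversion of the kind the references describe can be sketched as a rigid transform into the imaging device's frame followed by a perspective projection. This is an illustrative sketch only, not the references' actual implementation; the matrix names and values below are assumptions.

```python
import numpy as np

def map_pose_to_image(transform_3d_to_cam, intrinsics, pose_3d):
    """Map a 3D instrument position (e.g., from a location sensor) into
    2D image coordinates: rigid transform to the imaging device's frame,
    then perspective projection onto the image plane."""
    p_cam = transform_3d_to_cam @ np.append(pose_3d, 1.0)  # homogeneous 4-vector
    uvw = intrinsics @ p_cam[:3]                           # project to image plane
    return uvw[:2] / uvw[2]                                # pixel coordinates (u, v)

# Hypothetical numbers: identity pose between frames, a simple pinhole camera.
T = np.eye(4)
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0, 0.0, 1.0]])
uv = map_pose_to_image(T, K, np.array([0.0, 0.0, 1.0]))  # a point on the optical axis
```

With these assumed values the point projects to the principal point of the image, as expected for a point on the optical axis.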
However, Boddington teaches identifying, using a neural network, a segment of the anatomy in the 2D image (the relevant structures are identified using the segmentation machine learning algorithm; Boddington, ¶[0154]); and determining a transform between the 3D coordinate frame and the 2D coordinate frame that maximizes an alignment between the one or more first poses of the instrument in the 3D coordinate frame with the segment of the anatomy in the 2D image, the transform mapping a pose of the instrument in the 3D coordinate frame to the 2D coordinate frame (Boddington, ¶[0141]; the best fit transformation can be computed using a variety of established methods, including gradient descent on mutual information, cross-correlation, or the identification of corresponding specific anatomical structures in preoperative and intraoperative images; and ¶[0146]; 3D to 2D registration (or fitting) of a statistical shape model of the application anatomy to the 2D).

Fuimaono and Boddington are analogous art because they are from the same field of endeavor, artificial-intelligence intraoperative surgical guidance systems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate segmenting anatomy with a neural network, as taught by Boddington, into the invention of Fuimaono. The motivation to do so would be because the machine learning algorithm associated with this dataset analyzes, measures, and calculates the relevant variables and has the capability to identify suboptimal outputs and provide the user with situational awareness and hazard alerts leading to complication avoidance and error prevention.

Regarding claim 9, the combination of Fuimaono and Boddington teaches the method of claim 1. 
Fuimaono further teaches wherein the segment of the anatomy is a kidney, and wherein determining the transform includes: generating a kidney map based on the first location sensor data (Fuimaono ¶[0071]; the position sub-system 309 provides at least 3-D mapping data of a mapped anatomical region by which the anatomical region can be reconstructed in 3-D); and registering the kidney map with the 2D image (Fuimaono ¶[0063]; Operation 208, the image processor displays a composite image where the first (i.e., 2D fluoroscopic image data; ¶[0050]) and second images (i.e., 3D image data including the kidney map; ¶[0073]) are registered).

Regarding amended claim 13, the combination of Fuimaono and Boddington teaches the method of claim 1. Fuimaono further teaches wherein determining the transform includes: generating a 3D representation of the 2D image based in part on inserting the 2D image as a plane in an artificial 3D space (Fuimaono, ¶[0053]; the converter 43 can reconstruct the 3-D image data, originally acquired in an axial plane (FIG. 7B), in a coronal plane by creating the 3-D image data in 3-D space (FIG. 7C)).

Regarding claim 49, the combination of Fuimaono and Boddington teaches the method of claim 1. 
Fuimaono further teaches further comprising: mapping the one or more first poses of the instrument to historical indicators on the 2D image based on the transform so that each of the historical indicators represents a corresponding previous position and/or orientation of the instrument with respect to the segment of the anatomy; and displaying an instrument indicator on the 2D image simultaneously with displaying the one or more historical indicators on the 2D image, the instrument indicator and the one or more historical indicators together representing a path of the instrument (Fuimaono, ¶[0073]; display monitor 34 to display a composite 3-D image including anatomical geometries from both the 3-D mapping data and the 3-D tomographic image data (i.e., methods of X-ray computer tomography, of magnetic resonance tomography, or of 2D or 3D ultrasonic imaging can be used, ¶[0018]), including anatomical geometries not present or visible in the 3-D mapping data. By utilizing the aforementioned hybrid technology, the image processor 350 also incorporates movement of the electrodes of the catheters and therefore updates the image of the electrodes in real time to provide a dynamic visualization of the catheters and their electrodes correctly positioned, sized, and oriented to the displayed anatomical region).

Claim 17 is the system claim analogous to method claim 1, and is similarly analyzed. Claim 33 is the non-transitory CRM claim analogous to method claim 1, and is similarly analyzed. Claims 25 and 29 are system claims corresponding to method claims 9 and 13, respectively, and are similarly analyzed.

Regarding claim 51, the combination of Fuimaono and Boddington teaches the method of claim 1. Boddington further teaches wherein the alignment is maximized using a gradient descent algorithm (Boddington, ¶[0141]; the best fit transformation can be computed using a variety of established methods, including gradient descent). 
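The "gradient descent" alignment Boddington is cited for can be illustrated with a toy example. This is a hedged sketch under simplified assumptions (paired 2D points and a pure translation); real registration methods optimize richer transforms and similarity metrics such as mutual information or cross-correlation.

```python
import numpy as np

def fit_translation(proj_pts, seg_pts, lr=0.1, steps=200):
    """Gradient descent on a 2D translation t that minimizes the mean
    squared misalignment between projected instrument points and points
    on the segmented anatomy (i.e., maximizes their alignment)."""
    t = np.zeros(2)
    for _ in range(steps):
        residual = (proj_pts + t) - seg_pts    # per-point misalignment
        t -= lr * 2.0 * residual.mean(axis=0)  # step down the MSE gradient
    return t

# Hypothetical points: the anatomy points are the instrument points shifted by (2, 3).
proj = np.array([[0.0, 0.0], [1.0, 1.0]])
seg = proj + np.array([2.0, 3.0])
t = fit_translation(proj, seg)  # converges to approximately (2, 3)
```

Because the objective is a quadratic in t, each step is a fixed contraction toward the true offset, so the loop converges quickly for any reasonable learning rate below 0.5.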
Claims 14-16, 30-32, and 46-47 are rejected under 35 U.S.C. 103 as being unpatentable over Fuimaono in view of Boddington, and further in view of Florent et al., US 20180353240 A1, previously cited, hereinafter Florent.

Regarding claim 14, the combination of Fuimaono and Boddington discloses the method according to claim 1. The combination does not explicitly teach further comprising: obtaining a non-contrasted image of the anatomy depicting the instrument disposed therein (Fuimaono teaches that contrast is “preferable” (¶[0076]), implying that a non-contrasted image is obtained; however, obtaining a non-contrasted image is not explicitly disclosed); determining a shape of the instrument in the non-contrasted image; and mapping the identified segment of the anatomy to the non-contrasted image based at least in part on the shape of the instrument.

However, Florent discloses further comprising: obtaining a non-contrasted image of the anatomy depicting the instrument disposed therein (Florent ¶[0056]; interventional image data set providing unit 2 can be adapted to provide the interventional image data set such that it comprises first interventional images showing the fenestrated stent (i.e., instrument disposed therein) without a contrast agent); determining a shape of the instrument in the non-contrasted image (Florent ¶[0013]; the position providing unit may also be adapted to provide the shape of this [interventional instrument]); and mapping the identified segment of the anatomy (Florent ¶[0022]; a segmentation of at least a part of the vessel for determining its position) to the non-contrasted image based at least in part on the shape of the instrument (Florent, ¶[0030]; providing an interventional image data set showing an implanted object with an opening and a vessel with an opening by an interventional image data set providing unit; ¶[0031]; providing the position of the interventional instrument by a position providing unit in a frame of reference. 
The position providing unit determines the shape and position of the interventional instrument, where the position is based on the shape, as stated in ¶[0052]: determining the three-dimensional shape of the interventional instrument 10 and the three-dimensional position of this shape.)

Fuimaono and Florent are analogous art because they are from the same field of endeavor of defining a position and shape of an interventional instrument in medical imaging during an invasive procedure. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include depicting the instrument on a non-contrasted image and determining the shape of the instrument, as taught by Florent, in the combined invention of Fuimaono and Boddington. The motivation to do so would be that the instrument may appear clearly in a non-contrast image, that exposure of the kidneys to undesirable contrast agents is reduced, and that navigational assistance is provided by displaying the position of the instrument.

Regarding amended claim 15, the combination of Fuimaono, Boddington, and Florent discloses the method of claim 14. Fuimaono further discloses further comprising: obtaining, from the location sensor, third location sensor data indicating a third pose of the instrument in the 3D coordinate frame, the shape of the instrument being determined based at least in part on the third pose of the instrument (Fuimaono, ¶[0084]; during the catheter mapping, increasingly more surface points are added to the mapping data in the course of time).

Regarding amended claim 16, the combination of Fuimaono, Boddington, and Florent discloses the method of claim 14. 
Florent further discloses further comprising identifying a segment of the instrument in the non-contrasted image, the shape of the instrument being determined based at least in part on the identified segment of the instrument in the non-contrasted image (Florent, ¶[0071]; the catheter can also directly be detected in the respective image by using corresponding segmentation algorithms. As mentioned previously, ¶[0056], the image may be without a contrast agent). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include identifying a segment and determining the instrument shape based on the segment, as taught by Florent, in the combined invention of Fuimaono and Boddington. The motivation to do so would be to assist in determining the instrument used, as well as to provide position information for generating a display of the instrument within the anatomy.

Claims 30 and 46 are the system and non-transitory CRM claims, respectively, analogous to method claim 14, and are similarly analyzed. Claims 31 and 47 are the system and non-transitory CRM claims, respectively, analogous to method claim 15, and are similarly analyzed. Claim 32 is the system claim analogous to method claim 16, and is similarly analyzed.

Claims 10 and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Fuimaono in view of Boddington, and further in view of Walker et al., US 20210068911 A1, as cited in the IDS (filed on 12/16/2022), hereinafter Walker. Regarding claim 10, the combination of Fuimaono and Boddington discloses the method of claim 1. 
Boddington further teaches wherein determining the transform includes: determining an angle between the {imaging} device and the anatomy; and aligning the 3D coordinate frame with the 2D coordinate frame based on the determined angle (Numerous equations and formulas are used within the algorithms to calculate: measurements, differences, angles, grid and implant positions, fracture deviations to determine at least one measurement of surgical variables involving the implant or trauma; Boddington, ¶[0134]; registering the cup (i.e., device) grid 1005 to the anatomical structures 1000; ¶[0185], see Fig 23A-C, Fig 23C shown below).

[Figure images from Boddington omitted.]

Boddington teaches determining an angle between a device/surgical variable and anatomy, but does not explicitly disclose determining an angle between the imaging device and the anatomy. However, Walker teaches wherein determining the transform includes: determining an angle between the imaging device and the anatomy (Walker, ¶[0068]; the 2-D position of the probe is designated by the user in the fluoroscopy field of view (FOV) in images (i.e., 2D coordinates) obtained at two different C-arm roll angles); and aligning the 3D coordinate frame with the 2D coordinate frame based on the determined angle (Walker, ¶[0068]; EM coordinate system (i.e., 3D coordinates) is registered to the fluoroscopy coordinate system FF (i.e., 2D coordinates)).

Fuimaono and Walker are analogous art because they are from the same field of endeavor of tracking and/or controlling the movement, location, position, orientation, or shape of one or more parts of a flexible medical instrument disposed within an anatomical structure during interventional procedures with another medical imaging modality. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include using the angle between the anatomy and the imaging device to align the 3D coordinates of the positional sensor with the 2D coordinates of the medical image, as taught by Walker, in the combined invention of Fuimaono and Boddington. The motivation to do so would be to sync the sensor location measurements with the selected fluoroscopy locations so that a user may manipulate one of the objects relative to the other objects.

Claim 26 is the system claim analogous to method claim 10, and is similarly analyzed.

Claim 50 is rejected under 35 U.S.C. 103 as being unpatentable over Fuimaono in view of Boddington, and further in view of Ahmed et al., US 20230114385 A1, hereinafter Ahmed. Regarding claim 50, the combination of Fuimaono and Boddington teaches the method of claim 1. The combination does not explicitly disclose wherein the neural network is a convolutional neural network with a U-net architecture. However, Ahmed discloses wherein the neural network is a convolutional neural network with a U-net architecture (Ahmed, ¶[0051]; transformed images undergo deformable registration and are applied to a U-net convolutional network for segmentation of images).

Fuimaono and Ahmed are analogous art because they are from the same field of endeavor of real-time tracking and visualizing targeted internal organs during pre-operative planning and intraoperative stages of a surgical operation. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include segmentation using a U-net convolutional network, as taught by Ahmed, in the combined invention of Fuimaono and Boddington. The motivation to do so would be because the U-net convolutional network has been used for biomedical image segmentation. 
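For orientation, the U-net architecture Ahmed is cited for pairs a downsampling encoder with an upsampling decoder and concatenates encoder features back in through skip connections. The sketch below shows only that data flow with the learned convolutions stubbed out; it is an illustrative assumption, not Ahmed's implementation.

```python
import numpy as np

def unet_dataflow(img):
    """Minimal U-net data flow: encode (downsample), decode (upsample),
    and concatenate the encoder features back in via a skip connection.
    A real U-net applies learned convolutions at every stage and repeats
    this pattern over several resolution levels."""
    enc = img                                              # encoder features, shape (H, W)
    down = enc[::2, ::2]                                   # 2x downsample (stands in for pooling)
    up = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)  # 2x upsample (nearest neighbor)
    return np.stack([enc, up])                             # skip connection: 2-channel output

features = unet_dataflow(np.arange(16.0).reshape(4, 4))  # shape (2, 4, 4)
```

The concatenated output would then feed further convolutions that produce the segmentation mask; the skip connection is what lets the decoder recover fine spatial detail lost in downsampling.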
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Hemingway et al., US 20230139458 A1, teaches registration of image volumes, and would have been relied upon for teaching gradient descent algorithms for transforming coordinate space.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANDHANA PEDAPATI whose telephone number is 571-272-5325. The examiner can normally be reached M-F 8:30am-6pm (ET). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHANDHANA PEDAPATI/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

Dec 16, 2022
Application Filed
Apr 03, 2025
Non-Final Rejection — §103
Jun 05, 2025
Response Filed
Jun 23, 2025
Final Rejection — §103
Aug 13, 2025
Applicant Interview (Telephonic)
Aug 13, 2025
Examiner Interview Summary
Aug 19, 2025
Response after Non-Final Action
Sep 17, 2025
Request for Continued Examination
Sep 19, 2025
Response after Non-Final Action
Sep 23, 2025
Non-Final Rejection — §103
Dec 18, 2025
Response Filed
Feb 25, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602896
IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597095
INTELLIGENT SYSTEM AND METHOD OF ENHANCING IMAGES
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12571683
ELEVATED TEMPERATURE SCREENING SYSTEMS AND METHODS
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12548180
HOLE DIAMETER MEASURING DEVICE
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541829
MOTION-BASED PIXEL PROPAGATION FOR VIDEO INPAINTING
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 96% (+32.5%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
