Prosecution Insights
Last updated: April 19, 2026
Application No. 18/649,913

ALIGNING PRE-OPERATIVE SCAN IMAGES TO REAL-TIME OPERATIVE IMAGES FOR A MEDIATED-REALITY VIEW OF A SURGICAL SITE

Non-Final OA §DP
Filed: Apr 29, 2024
Examiner: SUN, HAI TAO
Art Unit: 2616
Tech Center: 2600 (Communications)
Assignee: Proprio Inc.
OA Round: 1 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (347 granted / 476 resolved; +10.9% vs TC avg, above average)
Interview Lift: +26.6% for resolved cases with an interview (strong)
Typical Timeline: 2y 7m average prosecution; 35 applications currently pending
Career History: 511 total applications across all art units

Statute-Specific Performance

§101: 6.9% (-33.1% vs TC avg)
§103: 65.8% (+25.8% vs TC avg)
§102: 2.3% (-37.7% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 476 resolved cases.
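The headline figures above can be cross-checked from the raw counts. A quick sketch, under three assumptions the dashboard does not state explicitly (grant probability equals the career allow rate, the interview lift is additive, and each "vs TC avg" delta is a simple percentage-point difference):

```python
# Cross-check of the dashboard's headline figures.
# Assumptions (not stated by the dashboard): grant probability equals the
# career allow rate, the interview lift is additive, and each "vs TC avg"
# delta is a percentage-point difference.

granted, resolved = 347, 476
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")         # 72.9%, displayed as 73%
print(f"With interview:    {allow_rate + 26.6:.1f}%")  # 99.5%, displayed as 99%

# Back out the implied Tech Center average per statute: tc_avg = rate - delta
for statute, rate, delta in [("101", 6.9, -33.1), ("103", 65.8, +25.8),
                             ("102", 2.3, -37.7), ("112", 15.9, -24.1)]:
    print(f"§{statute}: implied TC avg = {rate - delta:.1f}%")
```

Under this reading, every statute's implied Tech Center average comes out to 40.0%, which suggests the dashboard's TC baseline is a single rounded estimate rather than a per-statute figure.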

Office Action

§DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Election/Restrictions

Applicant elects Group I, claims 21-30, without traverse. Non-elected claims 31-40 have been withdrawn.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA, as explained in MPEP § 2159; see MPEP § 2146 et seq. for applications not subject to examination under those provisions.

A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection; a complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action, even where the NSDP rejection is provisional. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a); for a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

Terminal disclaimer forms are available at www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines which form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be completed entirely online; an eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information, see www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Rejection over U.S. Patent No. 10,912,625 B2

Claims 21-30 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-24 of U.S. Patent No. 10,912,625 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because every limitation of claim 21 is anticipated by claim 1 of U.S. Patent No. 10,912,625 B2. The chart below pairs each limitation of application claim 21 with the corresponding limitation of patent claim 1; application claims 22-30 correspond to patent claims 2-24.

Application claim 21: A method for generating a mediated reality view of a surgical site, the method comprising:
Patent claim 1: A method for generating a mediated reality view of a surgical site comprising:

Application: receiving a pre-operative image representing three-dimensional anatomy of a patient in a first position;
Patent: receiving pre-operative images representing three-dimensional anatomy of a patient in a first position;

Application: receiving real-time images from a camera array after the patient is positioned for surgery in a second position;
Patent: receiving real-time images from a camera array after the patient is positioned for surgery in a second position;

Application: identifying, based on the pre-operative image and the real-time images, a set of features visible in both the pre-operative image and the real-time images;
Patent: identifying, based on the pre-operative images, coordinates in a three-dimensional pre-operative image space corresponding to locations of fiducial markers present on a patch applied to the patient; identifying, based on the real-time images, coordinates in a three-dimensional real-time image space corresponding to the locations of the fiducial markers present on the patch applied to the patient;

Application: utilizing an optimization algorithm to identify a transformation that minimizes a distance between first coordinates of the set of features in the pre-operative image with second coordinates of the set of features in the real-time images;
Patent: performing an optimization algorithm to identify the transformation that minimizes a distance between transformed coordinates of a pattern of the fiducial markers in the pre-operative images with the coordinates of the pattern in the real-time images;

Application: applying the transformation to the pre-operative image to substantially align the pre-operative image with the real-time images;
Patent: applying a transformation to the pre-operative images to substantially align the locations of the fiducial markers in the pre-operative images to the locations of the fiducial markers in the real-time images, wherein applying the transformation comprises:

Application: overlaying the transformed pre-operative image on the real-time images to generate the mediated reality view; and
Patent: overlaying the transformed pre-operative images on the real-time images to generate the mediated reality view; and

Application: providing the mediated reality view to a display device for display.
Patent: providing the mediated reality view to a display device for display.

Rejection over U.S. Patent No. 11,376,096 B2 in view of Chopra

Claims 21-30 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,376,096 B2 in view of Chopra (US 2018/0092698 A1). Although the claims at issue are not identical, they are not patentably distinct from each other. Application claims 22-30 correspond to patent claims 2-20.

Application claim 21: A method for generating a mediated reality view of a surgical site, the method comprising:
Patent claim 1: A method for generating a mediated reality view of a surgical site comprising:

Application: receiving a pre-operative image representing three-dimensional anatomy of a patient in a first position;
Patent: receiving pre-operative images representing three-dimensional anatomy of a patient in a first position;

Application: receiving real-time images from a camera array after the patient is positioned for surgery in a second position;
Patent: receiving post-positioning images captured after the patient is positioned for surgery in a second position;

Application: identifying, based on the pre-operative image and the real-time images, a set of features visible in both the pre-operative image and the real-time images;
Patent: identifying, based on the pre-operative images and the post-positioning images, a set of corresponding features visible in the pre-operative images and the post-positioning images;

Application: utilizing an optimization algorithm to identify a transformation that minimizes a distance between first coordinates of the set of features in the pre-operative image with second coordinates of the set of features in the real-time images; applying the transformation to the pre-operative image to substantially align the pre-operative image with the real-time images;
Patent: applying a first transformation to the pre-operative images to substantially align locations of the corresponding features in the pre-operative images to respective locations in the post-positioning images to generate initial transformed pre-operative images; capturing, by a camera array, real-time images of the patient;

Application: overlaying the transformed pre-operative image on the real-time images to generate the mediated reality view; and
Patent: overlaying the initial transformed pre-operative images on the real-time images to generate an initial mediated reality view; and

Application: providing the mediated reality view to a display device for display.
Patent: providing the initial mediated reality view to a display device for display.

Claim 1 of U.S. Patent No. 11,376,096 B2 does not explicitly disclose "utilizing an optimization algorithm to identify a transformation that minimizes a distance between first coordinates of the set of features in the pre-operative image with second coordinates of the set of features in the real-time images." In the same field of endeavor, Chopra (US 2018/0092698 A1) teaches this limitation in paragraphs [0110], [0120], [0128]-[0129], and [0145]. The motivation for the combination would have been to improve the treatment of patients, to improve the coordination of patient data, to improve the fidelity of identifying the position of the SDD and produce a higher-resolution image, and to minimize the error when correlating the image data, as taught by Chopra in paragraphs [0060], [0088], and [0145].

Rejection over U.S. Patent No. 11,998,401 B2 in view of Chopra

Claims 21-30 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,998,401 B2 in view of Chopra (US 2018/0092698 A1). Although the claims at issue are not identical, they are not patentably distinct from each other. Application claims 22-30 correspond to patent claims 2-20.

Application claim 21: A method for generating a mediated reality view of a surgical site, the method comprising:
Patent claim 1: A method comprising:

Application: receiving a pre-operative image representing three-dimensional anatomy of a patient in a first position; receiving real-time images from a camera array after the patient is positioned for surgery in a second position; identifying, based on the pre-operative image and the real-time images, a set of features visible in both the pre-operative image and the real-time images;
Patent: identifying, based on images of a patient in a first position and post-positioning images of the patient in a second position, a set of corresponding features visible in the images of the patient in the first position and the post-positioning images, wherein the images of the patient in the first position are captured prior to a surgery and the post-positioning images are captured after the patient is positioned for the surgery; applying a first transformation to the images of the patient in the first position to substantially align locations of the corresponding features in the images of the patient in the first position to respective locations in the post-positioning images to generate initial transformed images; receiving real-time images of the patient captured at least in part during the surgery;

Application: utilizing an optimization algorithm to identify a transformation that minimizes a distance between first coordinates of the set of features in the pre-operative image with second coordinates of the set of features in the real-time images;
Patent: identifying three-dimensional coordinates of features in images of the patient in the first position and three-dimensional coordinates of features in the real-time images of the patient;

Application: applying the transformation to the pre-operative image to substantially align the pre-operative image with the real-time images;
Patent: applying a second transformation to the initial transformed images of the patient in the first position, based on the three-dimensional coordinates of features in images of the patient in the first position and the three-dimensional coordinates of features in the real-time images of the patient, to refine an alignment of the locations of features in the initial transformed images to locations of corresponding features in the real-time images to generate refined transformed images;

Application: overlaying the transformed pre-operative image on the real-time images to generate the mediated reality view; and
Patent: overlaying the refined transformed images over the real-time images of the patient to generate an initial mediated reality view; and

Application: providing the mediated reality view to a display device for display.
Patent: providing the initial mediated reality view to a display device for display.

Claim 1 of U.S. Patent No. 11,998,401 B2 does not explicitly disclose "utilizing an optimization algorithm to identify a transformation that minimizes a distance between first coordinates of the set of features in the pre-operative image with second coordinates of the set of features in the real-time images." In the same field of endeavor, Chopra (US 2018/0092698 A1) teaches this limitation in paragraphs [0110], [0120], [0128]-[0129], and [0145]. The motivation for the combination would have been to improve the treatment of patients, to improve the coordination of patient data, to improve the fidelity of identifying the position of the SDD and produce a higher-resolution image, and to minimize the error when correlating the image data, as taught by Chopra in paragraphs [0060], [0088], and [0145].

Allowable Subject Matter

Claims 21-30 would be allowable upon overcoming the double patenting rejections.

Closest Prior Art of Record

The closest prior art of record is Chopra (US 2018/0092698 A1) in view of Shahidi (US 7,844,320 B2), and further in view of Humphries (US 2013/0249907 A1); these references, either alone or in combination, fail to anticipate or render obvious the limitations required by the claims, or the claims as a whole, as discussed above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun, whose telephone number is (571) 272-5630. The examiner can normally be reached 9:00 AM-6:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Hajnik, can be reached at (571) 272-7642. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HAI TAO SUN/
Primary Examiner, Art Unit 2616
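The limitation at the center of all three rejections, identifying "a transformation that minimizes a distance between first coordinates of the set of features in the pre-operative image with second coordinates of the set of features in the real-time images," describes classic rigid point-set registration. As an illustrative sketch only (this is the standard Kabsch/Procrustes closed-form solution, not the specific method of the application or of any cited patent; the function name and test data are hypothetical):

```python
import numpy as np

def rigid_registration(P, Q):
    """Kabsch solution: rotation R and translation t minimizing the summed
    squared distance between P @ R.T + t and Q, where rows of P and Q are
    corresponding feature coordinates (e.g., fiducials located in
    pre-operative and real-time image space)."""
    Pm, Qm = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pm).T @ (Q - Qm)                  # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qm - Pm @ R.T
    return R, t

# Hypothetical check: recover a known rotation + translation from 6 points.
rng = np.random.default_rng(42)
pre_op = rng.normal(size=(6, 3))
angle = 0.3
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
real_time = pre_op @ Rz.T + np.array([1.0, -2.0, 0.5])

R, t = rigid_registration(pre_op, real_time)
aligned = pre_op @ R.T + t
print(np.abs(aligned - real_time).max())       # ~0 (machine precision)
```

In practice a closed-form step like this typically seeds an iterative refinement such as ICP when correspondences are noisy or unknown, which may be closer to the two-stage (initial plus refining transformation) structure recited in the reference patent claims.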

Prosecution Timeline

Apr 29, 2024
Application Filed
Feb 13, 2026
Non-Final Rejection — §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602816
SIMULATED CONFIGURATION EVALUATION APPARATUS AND METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12603024
DISPLAY CONTROL DEVICE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586310
APPARATUS AND METHOD WITH IMAGE PROCESSING
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12578846
GENERATING MASKED REGIONS OF AN IMAGE USING A PREDICTED USER INTENT
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579727
APPARATUS AND METHOD FOR ASYNCHRONOUS RAY TRACING
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 73%
Grant Probability with Interview: 99% (+26.6%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 476 resolved cases by this examiner. Grant probability derived from career allow rate.
