Prosecution Insights
Last updated: April 19, 2026
Application No. 18/568,938

USING MACHINE LEARNING AND 3D PROJECTION TO GUIDE MEDICAL PROCEDURES

Final Rejection §103
Filed: Dec 11, 2023
Examiner: KIM, KAITLYN EUNJI
Art Unit: 3797
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: The Regents of the University of California
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% (7 granted / 12 resolved; -11.7% vs TC avg)
Interview Lift: +65.7% (strong), comparing resolved cases with and without an interview
Avg Prosecution (typical timeline): 3y 2m
Currently Pending: 37
Total Applications (career, across all art units): 49
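These headline figures are simple ratios over the examiner's resolved cases. Below is a minimal sketch of one plausible derivation in Python; the per-case counts are illustrative stand-ins (the page does not publish per-case data), chosen only to show the arithmetic, and under this reading the lift is the difference in allow rate between interviewed and non-interviewed resolved cases, not a multiplier on the base rate.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool
    had_interview: bool

# Illustrative stand-in data: 12 resolved cases, 7 granted, as reported above.
# The interview split is hypothetical; the real per-case record is not shown.
cases = (
    [ResolvedCase(granted=True, had_interview=True)] * 5
    + [ResolvedCase(granted=True, had_interview=False)] * 2
    + [ResolvedCase(granted=False, had_interview=False)] * 5
)

def allow_rate(subset):
    """Share of granted cases within a subset of resolved cases."""
    return sum(c.granted for c in subset) / len(subset) if subset else 0.0

career = allow_rate(cases)
with_iv = allow_rate([c for c in cases if c.had_interview])
without_iv = allow_rate([c for c in cases if not c.had_interview])
lift = with_iv - without_iv  # "interview lift", in percentage points

print(f"career allow rate: {career:.1%}")  # 7/12 = 58.3%
print(f"interview lift:    {lift:+.1%}")   # with-interview vs without-interview
```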

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 42.2% (+2.2% vs TC avg)
§102: 21.4% (-18.6% vs TC avg)
§112: 22.5% (-17.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases
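Each statute figure reduces to a rate-versus-baseline comparison, and back-solving the deltas above puts the estimated Tech Center average near 40% for every statute. A small sketch of that check, using only the figures from the table:

```python
# Examiner's allowance-after-rejection rate per statute, from the table above,
# with the reported delta versus the Tech Center (TC) average.
examiner_rate = {"§101": 0.119, "§103": 0.422, "§102": 0.214, "§112": 0.225}
delta_vs_tc = {"§101": -0.281, "§103": +0.022, "§102": -0.186, "§112": -0.175}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # back-solve the TC baseline (~40% each)
    trend = "above" if delta_vs_tc[statute] > 0 else "below"
    print(f"{statute}: {rate:.1%} ({trend} TC avg of {tc_avg:.1%})")
```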

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1, 3, 5-8, 10-33, and 35 are pending in this application. Claims 16 and 23-33 are withdrawn, and Claims 1, 3, 5-8, 10-15, 17-22, and 35 have been examined on the merits.

Claim Objections

Claim 21 is objected to because of the following informalities: in claim 21, line 3, “radiographic imaging modalities. By using machine learning algorithms” should be “radiographic imaging modalities, wherein by using machine learning algorithms”. Appropriate correction is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3, 7, 10, 13-15, and 17-22 are rejected under 35 U.S.C. 103 as being unpatentable over Gibby (US20190365498A1) in view of Poltaretskyi (US20190380792A1) and further in view of Nahum (US20230008386A1).
Regarding Claim 1, Gibby teaches a system for guiding a surgical or medical procedure (corresponding disclosure in at least [0020], where the system helps guide the surgeon during procedures: “The overlay of an acquired medical image can assist a surgeon, doctor or other medical professionals in more accurately performing a treatment or operation upon a patient's anatomy”), the system comprising: a depth camera for acquiring images and/or video from a three-dimensional surface of a subject during the surgical or medical procedure (corresponding disclosure in at least [0041], where the headset used for the surgery contains a depth camera: “The AR headset may include holographic lenses and a depth camera”); a projector for projecting markings and/or remote guidance markings during the surgical or medical procedure such that these markings and guides enhance procedural decision-making (corresponding disclosure in at least [0047], where the device uses pins, or markings, to mark the surgical site: “The AR headset can anchor or ‘pin’ virtual images or objects into place with respect to the real environment or room”); and a trained machine learning guide generator in electrical communication with the depth camera and the projector (corresponding disclosure in at least [0083], where any programs in a higher-level language (machine learning) are in communication with the processor, “a program in a higher level language may be compiled into machine code in a format that may be loaded into a random access portion of the memory device 820 and executed by the processor 812”, which is in communication with the projector and camera, [0022]: “A processor associated with the AR headset”), the trained machine learning guide generator implementing a trained machine learning model specific to the three-dimensional surface of the subject (corresponding disclosure in at least [0031], where a surgical site can be highlighted with an ML model (neural network): “an organ to be resected can be highlighted with either manual technique, anatomic edge detection or neural networks where the system is trained to find the organ”), the trained machine learning guide generator configured to control the projector using the trained machine learning model such that surgical markings are projected onto the subject (corresponding disclosure in at least [0031], where a machine learning model projects the markings onto the subject: “a doctor may create one or more 2D (two dimensional) augmentation tags 150 a-b each located on one of a plurality of layers of the acquired medical image. The plurality of 2D (two dimensional) augmentation tags can be automatically joined together by an application to form a 3D augmentation tag or 3D shape that extends through multiple layers of the acquired medical image”), the trained machine learning model being trained by: creating a general detection model from a first set of annotated digital images, each annotated digital image being marked or annotated with a plurality of anatomic features (corresponding disclosure in at least [0036], where the anatomical structure is marked and identified: “Polyps or possible cancerous structures in the colon can be identified using neural network machine learning techniques”), each digital image including a plurality of anthropometric markings identified by an expert surgeon (corresponding disclosure in at least [0029], where the mapping or markings are viewable: “This augmentation tag may represent an anatomical structure, a point where an incision may be made, or other mappings and markings for the medical procedure”).

Gibby does not teach training the general detection model by backpropagation with a second set of annotated digital images from subjects that are surgical candidates. Poltaretskyi, in a similar field of endeavor, teaches a similar concept (use of a neural network for feature detection) of training the general detection model by backpropagation with a second set of annotated digital images from subjects that are surgical candidates (corresponding disclosure in at least [0855], where backpropagation is used for training the machine learning model: “Computing system 12202 may apply one of various techniques to use the training datasets to train the NN. For example, computing system 12202 may use one of the various standard backpropagation algorithms”, and further in [0825], where data used for the model is patient data: “Each respective training dataset corresponds to a different training data patient in a plurality of training data patients and comprises a respective training input vector and a respective target output vector”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the backpropagation for training the detection model using images of subjects that are surgical candidates (patients) as taught by Poltaretskyi. One of ordinary skill in the art would have been motivated to incorporate this because the model is able to make better predictions and reduce differences between predicted and actual output.

Gibby and Poltaretskyi do not teach a projector for projecting markings onto the three-dimensional surface of the subject. Nahum, in a similar field of endeavor, teaches a similar concept (surgical planning) of projection onto the surface of the subject (corresponding disclosure in at least [0134], where the projection is completed either with a projector projecting the markings onto the patient or with an AR device: “The augmented reality device can also be a screen placed in proximity to the patient 110, the screen displaying the planned trajectory. The augmented reality device can also comprise a projector projecting the planned trajectory onto the body of the patient 110 or can be a holographic device”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated a projection onto the three-dimensional surface of the subject as taught by Nahum. One of ordinary skill in the art would have been motivated to incorporate this because the projection provides additional guidance in an accurate manner and is another method of providing trajectory guidance.

Regarding Claim 3, the combined references noted above teach the limitations of Claim 1, and Gibby further teaches a combination of the depth camera and the projector cooperate to operate as a structured light scanner for creating three-dimensional digital images of the three-dimensional surface of the subject or another area relevant to a surgical procedure (corresponding disclosure in at least [0041], where there is a projector (holographic lens) and depth camera: “The AR headset may include holographic lenses and a depth camera”, and further in [0055], where the device produces a 3D image of the surgical site: “the augmented reality system can map the external contours of the patient using the depth camera or distance ranging cameras and can create a polygonal mesh that can be compared to the surface layer of the patient's 3D data obtained with imaging techniques”).

Regarding Claim 7, the combined references noted above teach the limitations of Claim 1, and Gibby further teaches wherein the trained machine learning guide generator or another computing device includes a machine learning algorithm trained to identify anatomical structures identified by radiological imaging techniques (corresponding disclosure in at least [0036], where the ML model can identify the polyp or structures in the colon (anatomical structures): “The image of the colon may be created using neural network machine learning methods. Polyps or possible cancerous structures in the colon can be identified using neural network machine learning techniques. The neural networks may have been trained using a large number of medical training data cases to be able to find or classify the polyp”).

Regarding Claim 10, the combined references noted above teach the limitations of Claim 1, and Gibby further teaches wherein the trained machine learning guide generator is configured to guide sequential steps of the surgical or medical procedure (corresponding disclosure in at least [0068], where the ML model guides the steps of the procedure: “morphometry can be used to identify procedure road markers during a medical procedure such as endoscopy, arthroscopy, laproscopy, etc. … the system may match one of the 5 structures viewed through a camera image and show a surgeon where the scope of the surgeon is located in the patient anatomy based on seeing a physical object that matches a virtual object that has already been recorded using morphometry”) by dynamically adjusting the surgical markings projected onto the subject during the surgical or medical procedure (corresponding disclosure in at least [0048], where the device identifies/adjusts the live image (during the procedure): “A live image of patient anatomy and the surrounding environment or room may be obtained using a live image camera of the AR headset 412. A patient marker 420 that is located on the patient and can be identified in the image or video of the patient anatomy as obtained by the AR headset”).
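Claim 1's training limitation amounts to a pretrain-then-fine-tune pipeline: build a general landmark-detection model from expert-annotated images, then refine it by backpropagation on a second annotated set drawn from surgical candidates. A minimal sketch of that pattern using PyTorch follows; the model architecture, dataset loaders, and hyperparameters are hypothetical illustrations, not taken from the application or the cited references.

```python
import torch
import torch.nn as nn

# Hypothetical landmark detector: predicts (x, y) for N anatomic landmarks.
N_LANDMARKS = 16

model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, N_LANDMARKS * 2),
)
loss_fn = nn.MSELoss()

def train(loader, epochs, lr):
    """One training stage: standard backpropagation over annotated images."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, landmarks in loader:  # landmarks: expert annotations
            opt.zero_grad()
            loss = loss_fn(model(images), landmarks)
            loss.backward()               # backpropagation
            opt.step()

# Stage 1: general detection model from the first set of annotated images
# (anthropometric markings identified by an expert surgeon). Loader is hypothetical.
# train(general_annotated_loader, epochs=50, lr=1e-3)

# Stage 2: refine by backpropagation on a second annotated set drawn from
# subjects who are surgical candidates. Loader is hypothetical.
# train(surgical_candidate_loader, epochs=10, lr=1e-4)
```

The sketch fine-tunes all weights at a lower learning rate; freezing the early layers in stage 2 would be an equally plausible reading of the claimed two-stage training.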
Regarding Claim 13, the combined references noted above teach the limitations of Claim 1, and Gibby further teaches wherein a remote operator can interact with a three-dimensional digital image of the three-dimensional human surface and propose surgical markings in order to add surgical guidance and/or make adjustments thereof (corresponding disclosure in at least [0033], where the medical professional can interact with the image to propose or select points for guidance and adjustment: “a medical professional may annotate a 2D slice or layer of an acquired medical image to create an augmentation tag 172. The medical professional may notate the image by carefully outlining the kidney or by selecting a point in the center mass of the kidney tissue and requesting an application to find the boundary of the kidney. This 2D augmentation tag can be turned into a three dimensional (3D) augmentation tag is described in this disclosure”).

Regarding Claim 14, the combined references noted above teach the limitations of Claim 1, and Poltaretskyi further teaches wherein the trained machine learning guide generator is configured to acquire data during surgical or medical procedures to improve accuracy of placing the surgical markings for future surgical or medical procedures (corresponding disclosure in at least [0551], where data is documented for future reference in surgical procedures: “optical code reading, RFID reading, or other machine automation may be incorporated into a visualization device, like visualization device 213 in order to allow such automation in an MR environment without the need for additional bar code readers or RFID readers. In this way, surgical procedure documentation may be improved by tracking and verifying that the correct surgical items are used at the proper states of the procedure”).

Regarding Claim 15, the combined references noted above teach the limitations of Claim 1, and Gibby further teaches wherein the trained machine learning guide generator executes one or more neural networks (corresponding disclosure in at least [0031], where multiple neural networks are used: “an organ to be resected can be highlighted with either manual technique, anatomic edge detection or neural networks where the system is trained to find the organ”).

Regarding Claim 17, the combined references noted above teach the limitations of Claim 1, and Poltaretskyi further teaches wherein the trained machine learning guide generator executes a high-resolution neural network (corresponding disclosure in at least [0821], where a high resolution neural network (deep neural network) is used: “such AI techniques may be employed during preoperative phase 302 (FIG. 3) or another phase of a surgical lifecycle. Artificial neural networks (ANNs), including deep neural networks (DNNs), have shown great promise as classification tools”).

Regarding Claim 18, the combined references noted above teach the limitations of Claim 1, and Poltaretskyi further teaches wherein the trained machine learning guide generator is configured to down sample images in parallel with a series of convolutional layers that preserve dimensionality, allowing for intermediate representations with higher dimensionality (corresponding disclosure in at least [0821], where there is a stacked neural network used which would allow for intermediate representations with higher dimensionality: “N 12300 may be or include one or more other forms of artificial neural networks such as, for example, deep Boltzmann machines; deep belief networks; stacked autoencoders; etc. Any of the neural networks described herein can be combined (e.g., stacked) to form more complex networks”).

Regarding Claim 19, the combined references noted above teach the limitations of Claim 17, and Gibby further teaches wherein the trained machine learning guide generator is trained by: providing a first set of annotated images of a predetermined area of a subject's surface to a generic model to form a point detection model (corresponding disclosure in at least [0031], where there are annotated images: “the 2D image or one slice of the acquired medical image can be annotated with one or more 2D images”, and further in [0036], “a medical professional can mark the polyp with an augmentation tag or use automatic assistance of an application to more easily mark the polyp”); and training the trained machine learning guide generator using the point detection model with a second set of annotated images annotated with surgical annotation for each surgical marking (corresponding disclosure in at least [0036], where an ML model is used with the identification of surgical markings (the finding of polyps): “The neural networks may have been trained using a large number of medical training data cases to be able to find or classify the polyp”).

Regarding Claim 20, the combined references noted above teach the limitations of Claim 1, and Gibby further teaches wherein the trained machine learning guide generator or another computing device is configured to receive and store subject-specific radiologic image data (corresponding disclosure in at least [0064], where the ML information is stored in a database: “the patient marker 530 may include information identifying the patient and pre-measured morphometric measurements 526 stored in a database”).

Regarding Claim 21, the combined references noted above teach the limitations of Claim 1, and Gibby further teaches wherein the trained machine learning guide generator is further trained to identify anatomical structures from radiographic imaging modalities (corresponding disclosure in at least [0028], where the images are radiographic images: “a plurality of augmentation tags may be linked together to represent one anatomical structure which crosses separate layers of the acquired medical image or radiological image. Thus, each tag may identify that anatomical structure in a separate layer”, and further in [0036], where the ML model is specified for this technique: “The image of the colon may be created using neural network machine learning methods. Polyps or possible cancerous structures in the colon can be identified using neural network machine learning techniques”).
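Claim 18's limitation (downsampling in parallel with convolutional layers that preserve dimensionality) describes a multi-resolution branch pattern familiar from high-resolution networks such as HRNet. Below is a minimal PyTorch sketch of one such block; the channel counts and layer choices are hypothetical, intended only to illustrate the branching, not to reproduce any cited design.

```python
import torch
import torch.nn as nn

class ParallelResolutionBlock(nn.Module):
    """Two parallel paths: one downsamples, one preserves spatial dimensionality,
    so an intermediate representation with higher dimensionality stays alive."""
    def __init__(self, channels):
        super().__init__()
        # Path A: downsampling (stride-2 convolution halves height and width).
        self.down = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        # Path B: dimensionality-preserving convolutions (stride 1, same padding).
        self.keep = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        low = self.down(x)   # reduced-resolution representation
        high = self.keep(x)  # full-resolution representation, preserved
        return high, low

block = ParallelResolutionBlock(channels=32)
high, low = block(torch.randn(1, 32, 128, 128))
print(high.shape, low.shape)  # (1, 32, 128, 128) and (1, 32, 64, 64)
```

Keeping the full-resolution path alive is what yields intermediate representations with higher dimensionality than a purely downsampling stack would retain.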
Regarding Claim 22, the combined references noted above teach the limitations of Claim 1, and Gibby further teaches wherein the projector projects deep anatomy onto the predetermined surgical site of the subject as well as directly onto deeper tissues to guide steps during surgery (corresponding disclosure in at least [0034], where projections go into the deep tissue (kidney tissue) with guidance: “The 3D augmentation tag can be created by a medical professional requesting the application to take the 2D augmentation tag and find the entire kidney shape by identifying tissue similar to kidney tissue selected by a medical professional in the acquired medical image”).

Claims 5, 6, 8, 11, and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Gibby (US20190365498A1), Poltaretskyi (US20190380792A1), and Nahum (US20230008386A1) as applied to Claim 1, and further in view of Sayadi (US20200104974A1).

Regarding Claim 5, the combined references noted above teach the limitations of Claim 1 but do not teach wherein the surgical or medical procedure is cleft lip surgery, ear reconstruction for microtia, cranial vault reconstruction for craniosynostosis, breast reconstruction after cancer resection, or reconstruction of traumatic or oncologic defects. Sayadi, in a similar field of endeavor, teaches a similar concept (projections of a surgical area) wherein the surgical or medical procedure is cleft lip surgery, ear reconstruction for microtia, cranial vault reconstruction for craniosynostosis, breast reconstruction after cancer resection, or reconstruction of traumatic or oncologic defects (corresponding disclosure in at least [0009], where the surgical procedure is used in breast reconstruction: “show surgical marking (dotted line) for breast reconstruction being projected upon by the present invention”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the use of the medical procedure for breast reconstruction after cancer resection as taught by Sayadi. One of ordinary skill in the art would have been motivated to incorporate this because the medical procedure captures the patient anatomy, which includes the breast region, and thus can be detected and modified to focus on that particular area.

Regarding Claim 6, the combined references noted above teach the limitations of Claim 1 and the trained machine learning guide generator ([0083] of Gibby) but do not teach binding a subject's anatomy to projected surgical markings such that projections remain stable with movement of the subject. Sayadi, in a similar field of endeavor, teaches a similar concept (projections of a surgical area) of binding a subject's anatomy to projected surgical markings such that projections remain stable with movement of the subject (corresponding disclosure in at least [0006], where projections are maintained across different conformations of the patient: “The medical information also adapts to the movements of the underlying surface while maintaining shape conformation. This projected information can be altered in real time (moved, repositioned, stretched, lengthened, shortened) by the user while maintaining shape conformation using modalities such as but not limited to hand gestures, markers or digital pens, tablets or other mobile devices. The projected information can change based on the view point of the user”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated binding the anatomy to projected surgical markings as taught by Sayadi. One of ordinary skill in the art would have been motivated to incorporate this because it ensures that the markings remain in correspondence with the anatomy despite any movements that may occur.

Regarding Claim 8, the combined references noted above teach the limitations of Claim 1, the trained machine learning guide generator ([0083] of Gibby), and the depth camera ([0041] of Gibby) but do not teach binding surface anatomy to surface anatomy captured on radiographs such that applying machine learning algorithms to each identifies locations of shared surface landmarks with images of normal or pathologic underlying anatomic structures projected onto a surface of the three-dimensional surface of the subject. Sayadi, in a similar field of endeavor, teaches binding surface anatomy to surface anatomy captured on radiographs (corresponding disclosure in at least [0006], where the projections are conformed to the registration points: “projection system which is used to project medical information onto the surface of the human body such that the medical information conforms and registers to the contour of the underlying shape of the region it is being projected upon”) such that applying machine learning algorithms to each identifies locations of shared surface landmarks with images of normal or pathologic underlying anatomic structures projected onto a surface of the predetermined surgical site (corresponding disclosure in at least [0026], where the locations of the landmarks are identified: “A surgeon preforming craniofacial surgery may project a virtual plan onto the surface of the skull guiding them where to make cuts into the mandible. The projected information registers and conforms to the contour of the mandible maintaining accuracy of the cutting locations that were planned before surgery”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated binding the anatomy to projected surgical markings as taught by Sayadi. One of ordinary skill in the art would have been motivated to incorporate this because it ensures that the markings remain in correspondence with the anatomy despite any movements that may occur.

Regarding Claim 11, the combined references noted above teach the limitations of Claim 1 and the machine learning model ([0083] of Gibby), but do not teach placement of the surgical markings in real time regardless of an angle of the three-dimensional surface of the subject. Sayadi, in a similar field of endeavor, teaches placement of the surgical markings in real time regardless of an angle of the predetermined surgical site (corresponding disclosure in at least [0006], where markings are placed in real time: “The medical information also adapts to the movements of the underlying surface while maintaining shape conformation. This projected information can be altered in real time (moved, repositioned, stretched, lengthened, shortened) by the user while maintaining shape conformation using modalities such as but not limited to hand gestures, markers or digital pens, tablets or other mobile devices”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the placement of markings in real time. One of ordinary skill in the art would have been motivated to incorporate this because the placements can then be adjusted during potential patient movements.

Regarding Claim 12, the combined references noted above teach the limitations of Claim 1 and the trained machine learning guide generator ([0083] of Gibby), but do not teach interacting with the projector to identify machine-learned landmarks, bind these to a given subject, and project these landmarks and guides directly onto the three-dimensional surface of the subject. Sayadi, in a similar field of endeavor, teaches interacting with the projector to identify machine-learned landmarks, bind these to a given subject, and project these landmarks and guides directly onto the predetermined surgical site (corresponding disclosure in at least [0048], where the markers are presented on the patient: “The patient marker 420 may include information identifying the patient, anatomy to be operated upon, a patient orientation marker 422, and/or an image inversion prevention tag 422. The patient orientation marker 422 and the image inversion prevention tag 422 may be separate or combined into one marker or tag with each other or the patient marker 420”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated projecting landmarks on an identified area as taught by Sayadi. One of ordinary skill in the art would have been motivated to incorporate this because the landmarks are more accurately placed at designated areas in the region prior to the surgery.

Claims 35 and 36 are rejected under 35 U.S.C. 103 as being unpatentable over Gibby (US20190365498A1) in view of Nahum (US20230008386A1).

Regarding Claim 35, Gibby teaches a system for guiding a surgical or medical procedure (corresponding disclosure in at least [0020], where the system helps guide the surgeon during procedures: “The overlay of an acquired medical image can assist a surgeon, doctor or other medical professionals in more accurately performing a treatment or operation upon a patient's anatomy”), the system comprising: a depth camera for acquiring images and/or video from a three-dimensional surface of a subject during the surgical or medical procedure (corresponding disclosure in at least [0041], where the device used for the surgery contains a depth camera: “The AR headset may include holographic lenses and a depth camera”); and a projector for projecting markings and/or remote guidance markings during the surgical or medical procedure such that these markings and guidance markings enhance procedural decision-making (corresponding disclosure in at least [0047], where the device uses pins, or markings, to mark the surgical site: “The AR headset can anchor or ‘pin’ virtual images or objects into place with respect to the real environment or room”). Gibby does not teach a projector for projecting markings onto the three-dimensional surface of the subject. Nahum, in a similar field of endeavor, teaches a similar concept (surgical planning) of projection onto the surface of the subject (corresponding disclosure in at least [0134], where the projection is completed either with a projector projecting the markings onto the patient or with an AR device: “The augmented reality device can also be a screen placed in proximity to the patient 110, the screen displaying the planned trajectory. The augmented reality device can also comprise a projector projecting the planned trajectory onto the body of the patient 110 or can be a holographic device”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated a projection onto the three-dimensional surface of the subject as taught by Nahum. One of ordinary skill in the art would have been motivated to incorporate this because the projection provides additional guidance in an accurate manner and is another method of providing trajectory guidance.

Regarding Claim 36, Gibby teaches a system for guiding a surgical or medical procedure (corresponding disclosure in at least [0020], where the system helps guide the surgeon during procedures: “The overlay of an acquired medical image can assist a surgeon, doctor or other medical professionals in more accurately performing a treatment or operation upon a patient's anatomy”), the system comprising: a depth camera for acquiring images and/or video from a three-dimensional human surface of a subject during the surgical or medical procedure (corresponding disclosure in at least [0041], where the headset used for the surgery contains a depth camera: “The AR headset may include holographic lenses and a depth camera”); a projector for projecting markings and/or remote guidance markings during the surgical or medical procedure such that these markings and guides enhance procedural decision-making (corresponding disclosure in at least [0047], where the device uses pins, or markings, to mark the surgical site: “The AR headset can anchor or ‘pin’ virtual images or objects into place with respect to the real environment or room”); and a trained machine learning guide generator in electrical communication with the depth camera and the projector (corresponding disclosure in at least [0083], where any programs in a higher-level language (machine learning) are in communication with the processor, “a program in a higher level language may be compiled into machine code in a format that may be loaded into a random access portion of the memory device 820 and executed by the processor 812”, which is in communication with the projector and camera, [0022]: “A processor associated with the AR headset”), the trained machine learning guide generator implementing a trained machine learning model specific to the three-dimensional surface of the subject (corresponding disclosure in at least [0031], where a surgical site can be highlighted with an ML model (neural network): “an organ to be resected can be highlighted with either manual technique, anatomic edge detection or neural networks where the system is trained to find the organ”), the trained machine learning guide generator configured to control the projector using the trained machine learning model such that surgical markings are projected onto the subject (corresponding disclosure in at least [0036], where the anatomical structure is marked and identified: “Polyps or possible cancerous structures in the colon can be identified using neural network machine learning techniques”). Gibby does not teach a projector for projecting markings onto the three-dimensional surface of the subject. Nahum, in a similar field of endeavor, teaches a similar concept (surgical planning) of projection onto the surface of the subject (corresponding disclosure in at least [0134], where the projection is completed either with a projector projecting the markings onto the patient or with an AR device: “The augmented reality device can also be a screen placed in proximity to the patient 110, the screen displaying the planned trajectory. The augmented reality device can also comprise a projector projecting the planned trajectory onto the body of the patient 110 or can be a holographic device”). It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated a projection onto the three-dimensional surface of the subject as taught by Nahum. One of ordinary skill in the art would have been motivated to incorporate this because the projection provides additional guidance in an accurate manner and is another method of providing trajectory guidance.

Response to Arguments

Applicant's arguments with respect to the 35 U.S.C. 103 rejections have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Relevant prior art includes US20210106386A1 (visual markers for surgical procedures), US10595844B2 (tracking during a medical procedure), and US20250009231A1 (identification of a tissue and tracking of features).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN KIM, whose telephone number is (571) 272-1821. The examiner can normally be reached Monday-Friday, 6-2 PST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anne Kozak, can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.E.K./
Examiner, Art Unit 3797

/SERKAN AKAR/
Primary Examiner, Art Unit 3797

Prosecution Timeline

Dec 11, 2023: Application Filed
Dec 11, 2023: Response after Non-Final Action
Jul 28, 2025: Applicant Interview (Telephonic)
Jul 29, 2025: Examiner Interview Summary
Sep 09, 2025: Applicant Interview (Telephonic)
Sep 09, 2025: Examiner Interview Summary
Sep 18, 2025: Non-Final Rejection — §103
Nov 24, 2025: Examiner Interview Summary
Dec 18, 2025: Response Filed
Mar 10, 2026: Final Rejection — §103 (current)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 99% (+65.7%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
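The projection figures are mutually consistent under one simple reading: treat the interview lift as an additive difference over a without-interview baseline. The sketch below runs that consistency check; the page does not publish its actual formula, so this is only one plausible reconstruction.

```python
with_interview = 0.99  # published grant probability with interview
lift = 0.657           # published interview lift (percentage points)
career_rate = 0.58     # published career allow rate

# Additive reading: the without-interview baseline implied by the figures.
without_interview = with_interview - lift  # ≈ 33.3%

# The career rate is then a mixture of the two baselines; solve for the
# implied share of resolved cases that had an interview.
share_interviewed = (career_rate - without_interview) / lift

print(f"implied baseline without interview: {without_interview:.1%}")   # ~33.3%
print(f"implied share of interviewed cases: {share_interviewed:.0%}")   # ~38%
```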
