Prosecution Insights
Last updated: April 19, 2026
Application No. 18/132,368

SURGICAL DECISION SUPPORT SYSTEM BASED ON AUGMENTED REALITY (AR) AND METHOD THEREOF

Status: Non-Final OA (§103)
Filed: Apr 08, 2023
Examiner: SUO, JOSHUA JUNGWOOK
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Smart Surgery
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 0m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 100% (2 granted / 2 resolved; +38.0% vs TC avg; above average)
Interview Lift: -100.0% (grant rate with vs. without an interview, over resolved cases with an interview)
Avg Prosecution: 2y 0m (fast prosecutor; 10 applications currently pending)
Career History: 12 total applications across all art units

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 57.6% (+17.6% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
Tech Center averages are estimates • Based on career data from 2 resolved cases
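Each statute figure pairs a rate with a delta against the Tech Center average, and the delta is plain subtraction. As a sanity check on the numbers above (a minimal sketch; the dashboard's exact methodology is not disclosed, and the dictionary names are illustrative), solving each listed delta backwards recovers the baseline the tool must be using, which turns out to be the same 40.0% estimate for every statute:

```python
# Examiner's statute-specific rates and the reported deltas vs the
# Tech Center average, copied from the figures above (in percent).
examiner_rate = {"§101": 3.0, "§103": 57.6, "§102": 21.2, "§112": 18.2}
delta_vs_tc = {"§101": -37.0, "§103": 17.6, "§102": -18.8, "§112": -21.8}

# delta = rate - tc_avg, so the implied baseline is rate - delta.
implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # every statute backs out to the same 40.0% estimate
```

That every statute implies an identical 40.0% baseline suggests the "TC avg" line is a single flat estimate rather than a per-statute figure.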

Office Action

§103
DETAILED ACTION

Specification

The specification is objected to because of the following informalities: The middle of paragraph [0030] erroneously indicates FIG. 2B as the drawing it is referencing. However, looking at the descriptions and numbers that correspond to the drawings, it appears that the figure it should be referencing is FIG. 3B. Appropriate correction is required.

Allowable Subject Matter

Claims 4 and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Amanatullah (US 20190231432 A1) in view of Amanatullah et al. (US 20210000564 A1) (hereinafter Nazerali).

As per claim 1, Amanatullah teaches the claimed:

1.
A surgical decision support system based on augmented reality (AR), comprising: a surgical database, configured to store surgical plans, wherein each of the surgical plans comprises an organ model, an operational process, a time point of using surgical instrument, and physiological data, each of the surgical plans is presentable through augmented reality; and (Amanatullah [0039]: “… the computer system stores anatomical and surgical plan data in a set of layers in the virtual patient model.” Amanatullah [0040]: “and store the virtual patient model—in association with the patient—in a database. Later, the computer system can access this virtual patient model from the database during the surgical operation on the patient.” Amanatullah teaches the surgical database that stores surgical plans, as shown in FIG. 3, where S120 is a block diagram of the surgical plan database. In addition to that, Amanatullah describes storing virtual patient models (surgical plans) in a database where they can later be accessed during a procedure. Amanatullah [0014]: “As shown in FIGS. 1A and 1B, a computer system can execute Blocks of the method S100 to access and transform scan data of a hard tissue of interest (e.g., bone) of a patient into a virtual patient model representing the hard tissue of interest prior to a surgical operation on the patient.” Amanatullah [0023]: “For example, the computer system can generate a sequence of augmented reality (“AR”) frames aligned to the hard tissue of interest in the surgeon's field of view and serve these augmented reality frames to an AR headset or AR glasses (or to another display in the surgical field) in order to visually indicate to the surgeon compliance with and/or deviation from steps of the surgical plan.” Amanatullah also teaches the surgical plans including organ models, such as bones, and FIGS. 1-4 all show a part of the body and where it needs to be cut, the part of the body being the organ model.
For the other parts of the surgical plans (the operational process, a time point of using surgical instruments, and physiological data), it is obvious to say that these are already part of a surgical plan, since they are necessary for planning out surgeries. In addition to that, Amanatullah also teaches that these surgical plans are presentable through augmented reality as stated in paragraph [0023].) a host, linked with the surgical database, and comprising: a non-transitory computer-readable storage medium, configured to store computer-readable program instructions; and a hardware processor, electrically connected to the non-transitory computer-readable storage medium, and configured to execute the computer-readable program instructions to execute: (Amanatullah [0171]: “The computer systems and methods described herein can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. … The instructions can be executed by computer-executable components integrated with apparatuses and networks of the type described above. The computer-readable medium can be stored on any computer readable media such as RAMs, ROMs, flash memory, EEPROMs, optical devices (CD or DVD), hard drives, floppy drives, or any device. The computer-executable component can be a processor but any dedicated hardware device can (alternatively or additionally) execute the instructions.” Amanatullah teaches the computer system (host) that is linked to the surgical database throughout the 17.1 Modeling section (paragraphs [0164-0172]), as Amanatullah describes the computer system (host) being able to access and use the surgical data. As mentioned in paragraph [0171] above, Amanatullah also teaches the computer-readable medium being able to store instructions and the hardware processor executing the instructions.)
a training module, linked with the surgical database, wherein before a surgery is performed, the training module is configured to select and load a corresponding one of the surgical plans for training, present the loaded surgical plan in augmented reality, enable sensors to continuously sense a free motion of a surgical instrument in a three-dimensional space during training, so as to create an optimal surgical operation; (Amanatullah [0030]: “A computing device executing Blocks of the method S100 can also interface with: an augmented reality device; one or more 2D color cameras, 3D cameras, and/or depth sensors (e.g., a LIDAR sensors, a structured light sensor); sensor-enabled surgical tools; and/or other sensors and actuators within the operating room.” Amanatullah teaches the training module as shown in FIGS. 1B & 2. FIG. 2 shows the flowchart of the surgical plans being selected and loaded into the augmented reality device so the surgeon can access and view the training. FIG. 1B similarly shows the virtual guide of the doctor and the specific places of the target areas to cut. Amanatullah also teaches the sensor-enabled surgical tools, which correspond to the sensors to continuously sense a free motion of a surgical instrument. It would be obvious to say that the sensor-enabled surgical tools will be able to sense the free motion of the instruments used, and since the sensor-enabled tools can interface with AR devices, sensing will occur in a 3D space. Therefore, this training module taught by Amanatullah creates an optimal surgical operation, as shown in FIGS. 1B & 2, where it gives examples of target and actual contours or cuts to allow surgeons to find the optimal solution.)
a sensing module, wherein during a process of performing the surgery, the sensing module is configured to enable the sensors to continuously sense the free motion of the surgical instrument in three-dimensional space to generate a current surgical operation; and (Amanatullah [0022]: “The method is also described below as executed by the computer system to generate augmented reality frames for presentation to a local surgeon in real-time during the surgery—such as through an augmented reality headset worn by the local surgeon or other display located in the operating room—to provide real-time look-back and look-forward guidance to the local surgeon. However, the computer system can implement similar methods and techniques to generate virtual reality frames depicting both real patient tissue and virtual content (e.g., target resected contours of tissues of interest defined in a virtual patient model thus registered to the real patient tissue) and to serve these virtual reality frames to a remote surgeon (or remote student). 
For example, the computer system can generate and serve such virtual reality frames to a virtual reality headset worn by a remote surgeon in real-time during the surgery in order to enable the remote surgeon to: monitor the surgery; manually adjust parameters of the surgery or surgical plan; selectively authorize next steps of the surgical plan; and/or serve real-time guidance to the local surgeon.” Amanatullah [0041]: “In this variation, the computer system can also: track surgical steps—such as reorientation of the patient or a portion of the patient, incision into the patient's body, excision of a tissue within the patient's body, installation of a surgical implant, etc.—throughout the surgical operation, as described below; and selectively enable and disable layers of the virtual patient model accordingly.” Amanatullah teaches the sensing module in the computer system, which generates and serves augmented frames to a surgeon in real time and also provides real-time look-back and look-forward guidance for the surgeon (generating a current surgical operation). One of the purposes of this is to monitor and track surgical steps of the procedure. The computer system can track incisions and excisions, which require precise positioning and timing to perform correctly, which implies that the tracking is able to sense the free motion of the surgical instruments.) Amanatullah alone does not explicitly teach the remaining claim limitations. However, Amanatullah in combination with Nazerali teaches the claimed: a decision support module, connected to the training module and the sensing module, configured to compare the optimal surgical operation and the current surgical operation, wherein when a comparison difference exceeds a tolerable range, the decision support module outputs a difference message to provide a surgical decision support.
(Nazerali [0109]: “In this example, the computer system can calculate: an average, maximum, minimum, and variance of needle paths and deployment durations for the current surgery; retrieve or calculate similar metrics for the set of past surgeries; and calculate a composite score that represents differences between these values across the current and past surgeries. If this difference exceeds a threshold difference, then the computer system can prompt surgical staff to confirm intent of the difference.” Amanatullah [0146]: “Then, if the computer system predicts a high probability of successful outcome (e.g., a probability of successful outcome that exceeds a threshold probability) given the current resected state of one or more tissues of interest in the surgical field, the computer system can prompt the surgeon to move to a next step of the surgical operation. However, if the computer system predicts a low probability of successful outcome given the current resected state of one or more tissues of interest in the surgical field, the computer system can prompt the surgeon to correct the actual resected contour of the hard tissue of interest, such as according to methods and techniques described above to reduce the spatial difference.” Nazerali teaches the decision support module, as the computer system calculates a composite score of the differences between the past surgeries (optimal surgical operation) and the current surgeries (current surgical operation), and checks whether the difference exceeds a threshold (tolerable range). Amanatullah teaches the difference message that provides a surgical decision, as stated in the passage above. If the probability exceeds a threshold probability, the computer system would prompt the surgeon to move on to the next step, or if it was lower, then the computer system would prompt the surgeon to correct any issue before moving on to the next steps.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to compare the optimal surgical operation and current surgical operation as taught by Nazerali with the system of Amanatullah in order to determine whether the current surgical operation is proceeding in the most optimal way, and if it is not, to prompt a message letting the surgeon know which next steps to take in order to stay within the threshold.

As per claim 6, this claim is similar in scope to limitations recited in claim 1, and thus is rejected under the same rationale.

Claims 2 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Amanatullah (US 20190231432 A1) in view of Amanatullah et al. (US 20210000564 A1) (hereinafter Nazerali), in further view of Reisin (US 20210007760 A1), and in further view of Lachenbruch (US 20160338591 A1).

As per claim 2, Amanatullah and Nazerali alone do not explicitly teach the claimed limitations. However, Amanatullah and Nazerali in combination with Reisin teach the claimed:

2.
The surgical decision support system based on augmented reality according to claim 1, wherein the training module inputs the free motion of the surgical instrument that is continuously sensed during the training into a machine learning model as training data, to train the machine learning model corresponding to the surgery, and after the machine learning model is trained completely, the free motion of the surgical instrument that is continuously sensed by the sensing module is permitted to input into the machine learning model to recognize the current surgical operation, (Reisin [0014]: “In some embodiments, the present disclosure may include machine learning application(s) configured to determine properties and/or type of the tissue being removed based on the sensing of one or more parameters during the surgical removal of the tissue.” Reisin [0016]: “The sensed parameters depend on the particulars of a given surgical system and its operational principle(s), such as, but not limited to, cutting, resection, aspiration, ultrasound, laser ablation, heat, and/or a combination of the thereof. In such surgical systems, the sensed parameters may include, but not limited to, cutting speed, ultrasound power, ultrasound frequency, ultrasound phase, ultrasound stroke, aspiration flow, vacuum level, irrigation flow, heat generation, heat dissipation, and others.” Reisin [0080]: “In particular, described in detail below are embodiments of surgical systems that utilize machine learning application(s) that is trained to learn, as an example, different sensed parameter values associated with tissue types/properties and determine preferred/optimized system parameters for tissue removal of the specific tissue under surgery.” Reisin teaches the machine learning application (model) that can sense one or more parameters. 
These parameters include cutting and resection, both procedures that require precise and accurate sensors to ensure that the free motion of the surgical instrument can be sensed. Reisin also teaches the machine learning application that can determine preferred or optimized system parameters for surgery. This corresponds to the machine learning model being able to recognize the current surgical operation, because if the machine learning application can determine a preferred or optimized operation for a surgery, it would be obvious to say that it can recognize the current surgical operation.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the machine learning application as taught by Reisin with the system of Amanatullah as modified by Nazerali in order to make the surgery more precise and personalized for each patient. Since every person’s body is different, training a machine learning model to recognize the current surgical procedure and give feedback on it can greatly improve the results of the surgery.

Amanatullah, Nazerali, and Reisin alone do not explicitly teach the remaining claimed limitations. However, Amanatullah, Nazerali, and Reisin in combination with Lachenbruch teach the claimed: and the tolerable range is dynamically adjusted based on a recognition result. (Lachenbruch [0036]: “… the information received from the sensor may additionally be used to further adjust the thresholds. In various embodiments, the server 104 includes a machine-learning algorithm that dynamically adjusts the threshold based on the received tissue status information.” Lachenbruch teaches the machine learning algorithm that can dynamically adjust the threshold (tolerable range is dynamically adjusted) based on the received tissue status information (recognition result).)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the machine learning algorithm that can dynamically adjust thresholds as taught by Lachenbruch with the system of Amanatullah as modified by Nazerali and Reisin in order to make the surgery more precise and personalized for each patient. Since every person’s body is different, training a machine learning model to track each movement and adjust certain parameters and thresholds for each person will greatly improve the results of the surgery.

As per claim 7, this claim is similar in scope to limitations recited in claim 2, and thus is rejected under the same rationale.

Claims 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Amanatullah (US 20190231432 A1) in view of Amanatullah et al. (US 20210000564 A1) (hereinafter Nazerali) and in further view of Shochat (WO 2021105992 A1).

As per claim 3, Amanatullah and Nazerali alone do not explicitly teach the claimed limitations. However, Amanatullah and Nazerali in combination with Shochat teach the claimed:

3. The surgical decision support system based on augmented reality according to claim 1, wherein each of the optimal surgical operation and the current surgical operation comprises a sequence of operation steps and ranges of a moving path of the surgical instrument at different time points, wherein when the decision support module detects that at least one of a difference between the sequences of operation steps of the optimal surgical operation and the current surgical operation and a difference between the ranges of the moving paths of the optimal surgical operation and the current surgical operation at the same time point exceeds the tolerable range, the decision support module marks the difference in a significant manner and embeds the difference into the difference message.
(Amanatullah [0024]: “In particular, the computer system can access a surgical plan … defining a sequence of target resected contours (or “resected contours”) of a patient's hard tissue of interest resulting from a sequence of surgical steps performed on the hard tissue of interest during an upcoming surgery.” Shochat (page 4, line 8): “According to some embodiments, there is provided a method of steering a medical instrument toward a target within a body of a subject, the method includes: calculating a planned 3D trajectory for the medical instrument from an entry point to a target in the body of the subject; steering the medical instrument toward the target according to the planned 3D trajectory; determining if a real-time position of the target deviates from a previous target position; if it is determined that the real-time position of the target deviates from the previous target position, updating the 3D trajectory of the medical instrument to facilitate the medical instrument reaching the target, and steering the medical instrument toward the target according to the updated 3D trajectory.” Shochat (page 27, line 18): “In some embodiments, the checkpoints may be predetermined and/or determined during the steering procedure. In some embodiments, the checkpoints may include spatial checkpoints (for example, regions or locations along the trajectory, including, for example, specific tissues, specific regions, length or location along the trajectory (for example, every 20-50 mm), and the like). In some embodiments, the checkpoints may be temporal checkpoints, i.e., a checkpoint performed at designated time points during the procedure (for example, every 2-5 seconds). In some embodiments, the checkpoints may include both spatial and temporal check points.” Shochat (page 28, line 10): “The deviation may be determined compared to a previous time point or spatial point, as detailed above. 
… In some embodiments, if a deviation in one or more of the abovementioned parameters is detected, the deviation is compared with a respective threshold, to determine if the deviation exceeds the threshold. … In some embodiments, if the real-time location of the medical instrument indicates that the instrument has deviated from the planned 3D trajectory, the user may add and/or reposition one or more the checkpoints along the planned trajectory, to direct the instrument back to the planned trajectory. In some embodiments, the processor may prompt the user to add and/or reposition checkpoint/s. In some embodiments, the processor may recommend to the user specific position/s for the new and/or repositioned checkpoints. Such a recommendation may be generated using image processing techniques and/or machine learning algorithms.” Amanatullah teaches the sequence of surgical steps, and Shochat teaches the planned 3D trajectory for a medical instrument at specific checkpoints that can include temporal checkpoints and time points, which corresponds to the ranges of a moving path of a surgical instrument at different time points. In addition, as stated in the claim 1 limitation, if these details can be accessed and created for a current surgical operation, it would be obvious to say that there are past (optimal) surgical operations that have been created and can be accessed. Moreover, the past (optimal) and current surgeries can be compared to see if there are any differences between the operation steps or the ranges of the moving paths. Therefore, after finding the differences, the system of Amanatullah can include the additional differences in the difference message. As described in Shochat, if there are deviations from the planned 3D trajectory, the processor prompts the user to add or reposition checkpoints, and may even recommend changes.
Thus, if there are any deviations or differences, instead of prompting the user for changes, the system may simply send a message notifying the user of the differences.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the planned 3D trajectory of the medical instrument as taught by Shochat with the system of Amanatullah as modified by Nazerali in order to compare specific pathways of surgical instruments at different time points and determine whether or not the current pathway is optimal, and also to prompt users about any changes or deviations during surgery so that issues are flagged as they arise, giving users more time to resolve them.

As per claim 8, this claim is similar in scope to limitations recited in claim 3, and thus is rejected under the same rationale.

Claims 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Amanatullah (US 20190231432 A1) in view of Amanatullah et al. (US 20210000564 A1) (hereinafter Nazerali) and in further view of Nawana (US 9129054 B2).

As per claim 5, Amanatullah and Nazerali alone do not explicitly teach the claimed limitations. However, Amanatullah and Nazerali in combination with Nawana teach the claimed:

5. The surgical decision support system based on augmented reality according to claim 1, wherein the decision support module continuously detects a surgical operation behavior, when the surgical operation behavior is interrupted or delayed abnormally, the decision support module simultaneously displays the organ model, the operational process, the time point of using surgical instrument and the physiological data of the loaded surgical plan, to provide assistive support and guidance.
(Nawana [0036]: “For yet another example, the operation module can provide the electronic feedback on a display, and the operation module can provide additional electronic information regarding the actual performance of the selected invasive treatment on the display including any one or more of a fluoroscopic image of the patient, vital signs of the patient, neural monitoring outputs, surgical techniques videos, camera feeds from outside a room where the selected invasive treatment is being performed, power usage of instruments, and controls for any one or more devices that gather the additional electronic information and provide the additional electronic information to the operation module. … For another example, the operation module can allow for user selection of anatomy of the patient to be shown on a display in any one or more of a plurality of visualization options, e.g., 3D images, holograms, and projections.… For yet another example, the operation module can determine at least one of a time length of retraction of a tissue during the actual performance of the selected invasive treatment, an amount of the tissue retraction, and an amount of pressure being placed on at least one of tissue and nerves as a result of the retraction, and the operation module can trigger an alarm if any one or more of the time length reaches a predetermined threshold amount of time, the amount of tissue retraction reaches a predetermined amount of tissue, and the amount of pressure reaches a predetermined amount of pressure.” Nawana [0179]: “The plan tracking module 226 can be configured to match the patient's position with a pre-surgery plan, e.g., a plan indicated in a saved simulated surgery or a plan for a typical procedure of a same type as the surgery being performed.” Nawana teaches the operation module (decision support module) that can provide additional information regarding the actual performance of the procedure (surgical operation behavior). 
Nawana also teaches that the operation module can trigger an alarm if the time length reaches a predetermined threshold amount of time (surgical operation behavior is delayed). The operation module can also display an anatomy of the patient visualized as a hologram or projection (organ model) and vital signs of a patient (physiological data). Nawana teaches the plan tracking module that contains the pre-surgery plans (surgical plans), where Nawana gives the example of a plan for a typical procedure of the same type as the surgery being performed; therefore, it would be obvious to say that this plan would contain operational processes and time points of using surgical instruments. Nawana [0163] states “The operation module 204 can include a plan tracking module 226”. Therefore, it can be said that the operation module can display anything that the plan tracking module can. Lastly, Nawana [0161] states “The operation module 204 can generally provide users of the system 10 with an interface for enhancing performance of a surgical procedure in an OR (or other location) and for gathering data for future analysis.”; therefore, the operation module provides assistive support and guidance.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the operation module as taught by Nawana with the system of Amanatullah as modified by Nazerali in order to give users assistance and guidance if any issues come up during surgery, and also to enhance the surgical procedure and gather surgical data for future analysis.

As per claim 10, this claim is similar in scope to limitations recited in claim 5, and thus is rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA SUO whose telephone number is (571) 272-8387. The examiner can normally be reached Mon-Fri 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached on (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOSHUA SUO/
Examiner, Art Unit 2616

/DANIEL F HAJNIK/
Supervisory Patent Examiner, Art Unit 2616

Prosecution Timeline

Apr 08, 2023
Application Filed
Dec 03, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597191
FACE IMAGE GENERATION METHOD AND DEVICE FOR GENERATING FULLY-CONTROLLABLE TALKING FACE
2y 5m to grant Granted Apr 07, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 0% (-100.0%)
Median Time to Grant: 2y 0m
PTA Risk: Low
Based on 2 resolved cases by this examiner. Grant probability derived from career allow rate.
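The headline projections above reduce to simple arithmetic over the examiner's resolved cases: the grant probability is the career allow rate (granted / resolved), and the interview figure is the percentage-point difference in grant rate between resolved cases with and without an interview. A minimal sketch of that derivation (the class and field names are illustrative, not the tool's actual schema):

```python
from dataclasses import dataclass

@dataclass
class ExaminerRecord:
    granted: int                         # resolved applications that were allowed
    resolved: int                        # granted + abandoned
    grant_rate_with_interview: float     # over resolved cases with an interview
    grant_rate_without_interview: float  # over resolved cases without one

def career_allow_rate(r: ExaminerRecord) -> float:
    # "2 granted / 2 resolved" above yields the 100% grant probability.
    return r.granted / r.resolved if r.resolved else 0.0

def interview_lift(r: ExaminerRecord) -> float:
    # Percentage-point change in grant rate associated with an interview.
    # A -100.0% lift on a 2-case sample is a small-sample artifact,
    # not a causal signal about interviews.
    return r.grant_rate_with_interview - r.grant_rate_without_interview

record = ExaminerRecord(granted=2, resolved=2,
                        grant_rate_with_interview=0.0,
                        grant_rate_without_interview=1.0)
print(f"{career_allow_rate(record):.0%}")  # 100%
print(f"{interview_lift(record):+.1%}")    # -100.0%
```

With only 2 resolved cases, both figures should be read as descriptive of the sample rather than predictive for this application.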
