Prosecution Insights
Last updated: April 19, 2026
Application No. 17/735,079

INTRAOPERATIVE IMAGE-GUIDED TOOLS FOR OPHTHALMIC SURGERY

Status: Non-Final OA (§103, §112)
Filed: May 02, 2022
Examiner: HUSSAINI, ATTIYA SAYYADA
Art Unit: 3792
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Microsurgical Guidance Solutions LLC
OA Round: 3 (Non-Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 3m
Grant Probability with Interview: 64%

Examiner Intelligence

Grants 52% of resolved cases.

Career Allow Rate: 52% (16 granted / 31 resolved; -18.4% vs TC avg)
Interview Lift: +12.4% on resolved cases with interview (moderate)
Avg Prosecution (typical timeline): 3y 3m
Currently Pending: 37
Career History: 68 total applications, across all art units

Statute-Specific Performance

§101: 5.0% (-35.0% vs TC avg)
§102: 18.6% (-21.4% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§112: 25.6% (-14.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 31 resolved cases.
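The headline figures above are internally consistent: 16 granted of 31 resolved is 51.6% (displayed rounded to 52%), and 64% with interview minus that unrounded base gives the +12.4% interview lift. A minimal sketch of that arithmetic, assuming the lift is the simple difference between the two rates:

```python
granted, resolved = 16, 31
career_allow = 100 * granted / resolved           # 51.6%
with_interview = 64.0                             # dashboard figure
lift = with_interview - career_allow              # +12.4 points

print(f"career allow rate: {career_allow:.1f}%")  # 51.6%
print(f"interview lift:   +{lift:.1f} pts")       # +12.4 pts
```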

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 17 February 2026 has been entered.

Response to Amendment

This Office Action is responsive to the RCE filed 17 February 2026. As directed by the amendment: Claims 1, 3-5, 14, 16-17, and 20-21 are amended, and no claims have been cancelled or added. Thus, Claims 1-8 and 10-22 are presently pending and under examination.

Response to Arguments

Response to Amendments Regarding 35 USC § 112

Applicant's amendments have overcome the 112(b) rejection of claim 17 previously set forth in the Final Office Action mailed 17 September 2025. However, Claim 14 is now rejected under 112(b), as described in detail below.

Response to Arguments Regarding 35 USC § 102/103

Applicant's arguments, see pg. 9-10 of Remarks, filed 17 February 2026, with respect to the rejection(s) of claim(s) 1-6, 10-15, and 18-22 under 35 USC 103 as unpatentable over Buch (US 2021/0307841 A1) in view of Manzke et al. (US Patent 11,304,686 B2) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Chow et al. (US Patent 10,758,309 B1), hereinafter Chow.

Applicant has amended independent claims 1, 14, and 20 to recite the limitation of “ophthalmic surgical procedure…construct augmented visual images that include the real-time visual images and the image features developed by the AI model, based on the augmented visual images, automatically adjust an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure” (emphasis added), and further argues that Manzke does not teach or suggest the use of ML-controlled imaging as described and claimed in this application and does not include any smart or real-time guidance. Examiner agrees with Applicant's persuasive arguments, and has instead used Chow to reject the amended claim limitations, and the dependent claims that depend from them, under a new ground of rejection under 35 USC 103 (Buch in view of Chow), as described in detail below. Therefore, claims 1-6, 10-15, and 18-22 are rejected as described below under 35 USC 103 (Buch in view of Chow). No specific arguments were presented with respect to the previous 35 U.S.C. 103 rejections of dependent claims 7-8 and 16-17, nor with respect to the previously cited prior art references Panescu and Beelen. Therefore, claims 7-8 and 16-17 remain rejected as described below under 35 USC 103.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 14 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 14 recites the limitation "the ophthalmic procedure" in line 9. There is insufficient antecedent basis for this limitation in the claim. It is suggested that Applicant amend the limitation to recite “the ophthalmic surgical procedure”, as recited in line 4 of claim 14.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1-6, 10-15, and 18-22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buch et al. (US 2021/0307841 A1, previously cited), hereinafter Buch, in view of Chow et al. (US Patent 10,758,309 B1), hereinafter Chow.
Regarding claim 1, Buch discloses an image-guided tool for surgical procedures ([0002] “systems…for generating and providing artificial intelligence assisted surgical guidance”) comprising: a processor (Claim 11: “the system comprising: at least one processor”); a display device coupled to the processor ([0035] “Surgical guidance generator 104, in one example, may output the surgical guidance on a video screen that surgeons use during image guided surgery.”); an imaging system coupled to the processor ([0018] “a video feed of the operative field can be obtained with either an overhead operating room (OR) camera, microscope, or endoscope”); a memory device ([0006] “non-transitory computer-readable media, such as disk memory devices, chip memory devices”), coupled to the processor storing instructions executable by the processor (Claim 21: “A non-transitory computer readable medium having stored thereon executable instructions that when executed by a processor”), the memory device including an artificial intelligence (AI) model (Claim 21: neural network), to cause the processor to: receive, from the imaging system, visual images in real-time of a surgical field during an ophthalmic surgical procedure (Claim 21: “receiving, by the neural network, a live feed of video images from a surgery”, [0022] “The AOA will also be able to caution surgeons if they are approaching structures designated as “do not manipulate,” such as spinal cord during intradural spine surgery or the posterior capsule of the lens in cataract surgery.”, Examiner notes that the artificial operative assistant disclosed in the prior art reference is used in cataract surgery and thus the other limitations presented in this prior art reference with regard to the AOA can be utilized in cataract surgery.); extract regions of interest in the surgical field using information provided by the AI model (Claim 21: “classifying, by the neural network, at least one of anatomical objects, surgical objects, and tissue manipulations in the live feed of video images”); select a region of interest computed by the AI model ([0042] “A flow warping algorithm calculates the movement trajectory of each segmented object from the preceding frames to guide the RPN in choosing regions of interest.”); compute image features for a surgical phase of the ophthalmic surgical procedure being performed ([0039] “The segmentation data in FIG. 3B may be used to train neural network 100 to identify the anatomical objects (the blood vessel and the spinal cord) and the surgical instrument”, [0023] “chronological and positional relationships between anatomical objects, surgical objects, and tissue manipulation over the course of a procedure will be entrained by the network. Quantifying these relationships will enable real-time object tracking”); construct augmented visual images that include the real-time visual images and the image features developed by the AI model ([0044] “The artery, spinal cord and INSTRUMENT1 are segmented by the network illustrated in FIG. 4, and that segmentation is depicted in the video frames as transparent color of different colors on top of each anatomical object or image. For example, the artery may be overlaid with a blue transparent overlay, the spinal cord may be overlaid with a yellow transparent overlay, and the instrument may be overlaid with a purple transparent overlay.”, [0039] “Once trained, the neural network can output segmented overlays, such as that illustrated in FIG. 3C.”).

Buch fails to disclose, based on the augmented visual images, automatically adjusting an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure. However, Chow teaches a method and system for controlling (or facilitating control of) surgical tools during surgical procedures (Column 1, lines 8-10) wherein the processor (computer-vision processing system 110): receive, from the imaging system, visual images in real-time of a surgical field during an ophthalmic surgical procedure (see Figure 1: real-time data collection system 145, Column 12, lines 1-6: “The image data can be received from a real-time data collection system 145, which can include (for example) one or more devices (e.g., cameras) located within an operating room and/or streaming live imaging data collected during performance of a procedure.”, Column 4, lines 23-33: phacoemulsification device); compute image features for a surgical phase of the ophthalmic surgical procedure being performed (Column 12, lines 7-24: “The machine-learning model can be configured to detect and/or characterize objects within the image data. The detection and/or characterization can include segmenting the image(s). In some instances, the machine-learning model includes or is associated with a preprocessing (e.g., intensity normalization, resizing, etc.) that is performed prior to segmenting the image(s). An output of the machine-learning model can include image-segmentation data that indicates which (if any) of a defined set of objects are detected within the image data, a location and/or position of the object(s) within the image data, and/or state of the object. State detector 150 can use the output from execution of the configured machine-learning model to identify a state within a surgical procedure that is then estimated to correspond with the processed image data. Procedural tracking data structure 155 can identify a set of potential states that can correspond to part of a performance of a specific type of procedure.”); construct augmented visual images that include the real-time visual images and the image features developed by the AI model (Column 13, lines 29-35: “Output generator 160 can also include an augmentor 175 that generates or retrieves one or more graphics and/or text to be visually presented on (e.g., overlaid on) or near (e.g., presented underneath or adjacent to) real-time capture of a procedure. Augmentor 175 can further identify where the graphics and/or text are to be presented (e.g., within a specified size of a display).”, Column 13, lines 42-55, Column 2, lines 31-38: artificial intelligence model used to analyze video feed); based on the augmented visual images, automatically adjust an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure (Column 1, lines 47-58: "The camera is configured to capture live video within a field of view. The live video feed generated by the camera can be fed into the trained machine-learning model. The machine-learning model is trained, and thus, is configured to recognize patterns or classify objects within image frames of the live video feed. A procedural control system may communicate with the computer-vision processing system to control (or facilitate control of) the surgical tools based on the recognized patterns or classified objects that are outputted from the machine-learning model (e.g., the output being a result of processing the live video feed using the trained machine-learning model)", Column 2, lines 39-47: "In some implementations, the control (or facilitated control) of a surgical tool may be automatic", Column 4, lines 23-33: "For example, if the surgical tool is a phacoemulsification device, the computer-vision processing system may detect whether the device is too close to an iris (e.g., within a threshold distance) based on a comparison of the distance of the device to the anatomical structure and a threshold distance. If the device is detected as being too close to the iris (e.g., within the threshold distance), then the computer-vision processing system can generate an output that, when received at the procedural control system, causes the phacoemulsification device to cease operation.", Column 4, lines 34-61). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to, based on the augmented visual images, automatically adjust an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do so to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).
Regarding claim 2, Buch in view of Chow teaches the image-guided tool of claim 1 (as shown above). Buch further discloses wherein the imaging system is located external to the surgical field and coupled to the processor ([0035] “The video screen may be standalone display separate from the surgical field.”, [0018] “a video feed of the operative field can be obtained with either an overhead operating room (OR) camera”).

Regarding claim 3, Buch in view of Chow teaches the image-guided tool of claim 1 (as shown above). Buch further discloses wherein the AI model computes image features based on a surgical instrument used in the ophthalmic surgical procedure ([0036]-[0037] “In step 202, the neural network may receive a live feed of video images from a surgery in progress. For example, trained neural network 102 may be provided with live video feed from a surgery. In step 204, trained neural network 102 identifies anatomical objects, surgical objects, and tissue manipulations from the live video feed from the surgery and outputs classifications of the anatomical objects, surgical objects, and tissue manipulations.”, [0033] “Surgeon-specific metrics could include movement efficiency metrics based on, for example, the amount of time a particular instrument was used for a given step in the procedure.”, [0022] “The AOA will also be able to caution surgeons if they are approaching structures designated as “do not manipulate,” such as spinal cord during intradural spine surgery or the posterior capsule of the lens in cataract surgery.”, Examiner notes that the artificial operative assistant disclosed in the prior art reference is used in cataract surgery and thus the other limitations presented in this prior art reference with regard to the AOA can be utilized in cataract surgery.). Alternatively, Chow teaches wherein the AI model computes image features based on a surgical instrument used in the ophthalmic surgical procedure (Column 2, lines 27-35: “The computer-vision processing system can train the machine-learning model using machine-learning or artificial intelligence techniques (described in greater detail herein). For example, the computer-vision processing system can store a data set of sample images of surgical tools. The machine-learning or artificial intelligence techniques can be applied to the data set of sample images to train the machine-learning model to recognize patterns and classify objects within the images.”, Column 8, line 51-Column 9, line 4: “The detected patterns can be used to define a model that can be used to recognize objects, such as surgical tools, within the sample images. As a non-limiting example, a deep residual network (ResNet) may be used to classify surgical tools or anatomical structures from image pixels of a live video feed”). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to have the AI model compute image features based on a surgical instrument used in the ophthalmic surgical procedure, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do so to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).

Regarding claim 4, Buch in view of Chow teaches the image-guided tool of claim 3 (as shown above). Buch further discloses wherein the AI model classifies the phase of the ophthalmic surgical procedure being performed based on the visual images of the surgical instrument used in the surgical field ([0023] “chronological and positional relationships between anatomical objects, surgical objects, and tissue manipulation over the course of a procedure will be entrained by the network. Quantifying these relationships will enable real-time object tracking and feedback for surgeons regarding normal stepwise procedure flow (chronological relationships) and upcoming “hidden” objects (positional relationships)”). Alternatively, Chow teaches wherein the AI model classifies the phase of the ophthalmic surgical procedure being performed based on the visual images of the surgical instrument used in the surgical field (Column 9, lines 5-17: “The trained machine-learning model can then be used in real-time to process one or more data streams (e.g., video streams, audio streams, image data, haptic feedback streams from a laparoscopic surgical tool, etc.). The processing can include (for example) recognizing and classifying one or more features from the one or more data streams, which can be used to interpret whether or not a surgical tool is within the field of view of the camera. Further, the feature(s) can then be used to identify a presence, position and/or use of one or more objects (e.g., surgical tool or anatomical structure), identify a stage or phase within a workflow (e.g., as represented via a surgical data structure), predict a future stage within a workflow, and other suitable features.”, Column 16, lines 12-15: “the recognition of a surgical tool by the computer-vision processing system can be used to interpret which stage (or phase) of a multistage or sequential-phase procedure is being performed at a given moment.”). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to have the AI model classify the phase of the ophthalmic surgical procedure being performed based on the visual images of the surgical instrument used in the surgical field, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do so to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).

Regarding claim 5, Buch in view of Chow teaches the image-guided tool of claim 4 (as shown above). Buch further discloses the tool further comprising: constructing the augmented visual images further based on the phase of the surgical procedure computed by the AI model ([0036]-[0037] “In step 202, the neural network may receive a live feed of video images from a surgery in progress. For example, trained neural network 102 may be provided with live video feed from a surgery. In step 204, trained neural network 102 identifies anatomical objects, surgical objects, and tissue manipulations from the live video feed from the surgery and outputs classifications of the anatomical objects, surgical objects, and tissue manipulations.”, [0033], Claim 9: “the surgical guidance includes overlaying the surgical guidance on the live feed of video images or onto a surgical field using augmented reality”, Figures 6A-6B, [0033] “algorithms that process and display output from neural network 102 in surgery and surgeon-specific manners may be used in combination with trained neural network 102 to provide enhanced surgical guidance. This can include a surgical roadmap with suggested next steps at each time point based on chronologically processed data for each surgery type”); and displaying the augmented visual images on the display device ([0005] “outputting, in real time, by audio and video-overlay means, tailored surgical guidance based on the classified at least one of anatomical objects, surgical objects, and tissue manipulations in the live feed of video images”, [0020] “on a monitor or, in the future, a heads-up augmented reality display”, [0035] “may output the surgical guidance on a video screen that surgeons use during image guided surgery”). Alternatively, Chow discloses constructing the augmented visual images further based on the phase of the ophthalmic surgical procedure computed by the AI model; and displaying the augmented visual images on the display device (Column 13, lines 42-55: “Augmentor 175 can send the graphics and/or text and/or any positioning information to an augmented reality device (not shown), which can integrate the (e.g., digital) graphics and/or text with a user's environment in real time. The augmented reality device can (for example) include a pair of goggles that can be worn by a person participating in part of the procedure. It will be appreciated that, in some instances, the augmented display can be presented at a non-wearable user device, such as at a computer or tablet. The augmented reality device can present the graphics and/or text at a position as identified by augmentor 175 and/or at a predefined position. Thus, a user can maintain real-time view of procedural operations and further view pertinent state-related information.”, Column 23, lines 1-7: “Various peripheral devices can further be provided, such as conventional displays 630, transparent displays that may be held between the surgeon and patient, ambient lighting 632, one or more operating room cameras 634, one or more operating room microphones 636, speakers 640 and procedural step notification screens placed outside the operating room to alert entrants of critical steps taking place”, Column 12, lines 18-40). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow for constructing the augmented visual images further based on the phase of the ophthalmic surgical procedure computed by the AI model, and displaying the augmented visual images on the display device, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do so to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).

Regarding claim 6, Buch in view of Chow teaches the image-guided tool of claim 4 (as shown above). Buch further discloses wherein the AI model provides feedback signals to an auditory device ([0017] “results returned to the OR…through a boom audio”, [0020] “This output, namely the identified key elements of the surgical field, can then be returned to the surgeon in audiovisual from on a monitor or, in the future, a heads-up augmented reality display in real-time”), the auditory device providing an audio warning when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure ([0032] “when the pointer is brought in close proximity to bone, surgical guidance generator 104 may generate output…as an auditory signal saying the word “bone”, during the surgery”).

Regarding claim 10, Buch in view of Chow teaches the image-guided tool of claim 2 (as shown above). Buch further discloses wherein the AI model is a region-based convolutional neural network (R-CNN) ([0040], [0042] “supervised training of the RCNN”).

Regarding claim 11, Buch in view of Chow teaches the image-guided tool of claim 2 (as shown above). Buch further discloses wherein the AI model is a segmentation network (SN) ([0036] “segmentation algorithms that identify anatomical objects, surgical objects, and tissue manipulations in the video images”).

Regarding claim 12, Buch in view of Chow teaches the image-guided tool of claim 2 (as shown above). Buch further discloses wherein the display device is a surgical microscope ([0018] “a video feed of the operative field can be obtained with…a microscope”).

Regarding claim 13, Buch in view of Chow teaches the image-guided tool of claim 2 (as shown above). Buch further discloses wherein the display device is a display monitor or an augmented reality headset ([0020] “This output, namely the identified key elements of the surgical field, can then be returned to the surgeon in audiovisual from on a monitor or, in the future, a heads-up augmented reality display in real-time”).
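Claims 3-5 turn on inferring the surgical phase from the instruments visible in the field, which Chow handles with a "procedural tracking data structure" of candidate states. A hedged sketch of that idea follows; the phase names, tool names, and matching rule are all invented for illustration:

```python
# Hypothetical tool-driven phase classification, in the spirit of Chow's
# "procedural tracking data structure": each phase of a cataract procedure is
# associated with the instruments expected to be in view during that phase.
from typing import Optional

PHASES = [
    ("incision",              {"keratome"}),
    ("capsulorhexis",         {"rhexis_forceps"}),
    ("phacoemulsification",   {"phaco_tip"}),
    ("irrigation_aspiration", {"ia_handpiece"}),
]

def classify_phase(visible_tools: set, current: Optional[str]) -> Optional[str]:
    """Return the first phase whose expected tools are all in view."""
    for name, expected in PHASES:
        if expected <= visible_tools:  # subset test: all expected tools seen
            return name
    return current  # no confident match: keep the last known phase

print(classify_phase({"phaco_tip"}, current="capsulorhexis"))  # phacoemulsification
print(classify_phase(set(), current="phacoemulsification"))    # unchanged
```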
Regarding claim 14, Buch discloses a method for performing surgical procedures using an image-guided tool (Abstract: “a method for generating and providing artificial intelligence assisted surgical guidance”), the method comprising: receiving in real-time visual images from an imaging system of a surgical field during an ophthalmic surgical procedure ([0005] “The method subsequently includes receiving, by the neural network, a live feed of video images from the surgery”, [0022] “The AOA will also be able to caution surgeons if they are approaching structures designated as “do not manipulate,” such as spinal cord during intradural spine surgery or the posterior capsule of the lens in cataract surgery.”, Examiner notes that the artificial operative assistant disclosed in the prior art reference is used in cataract surgery and thus the other limitations presented in this prior art reference with regard to the AOA can be utilized in cataract surgery); extracting regions of interest in the surgical field using information provided by an artificial intelligence (AI) model ([0005] “The method further includes classifying, by the neural network, at least one of anatomical objects, surgical objects, and tissue manipulation in the live feed of video images.”); selecting a region of interest computed by the AI model ([0042] “A flow warping algorithm calculates the movement trajectory of each segmented object from the preceding frames to guide the RPN in choosing regions of interest.”); developing by the AI model selected image features based on the surgical instrument used in the region of interest and classifying a phase of the ophthalmic procedure being performed ([0039] “The segmentation data in FIG. 3B may be used to train neural network 100 to identify the anatomical objects (the blood vessel and the spinal cord) and the surgical instrument”, [0023] “chronological and positional relationships between anatomical objects, surgical objects, and tissue manipulation over the course of a procedure will be entrained by the network. Quantifying these relationships will enable real-time object tracking”); constructing augmented visual images that include the real-time visual images, the image features developed by the AI model, and surgical phase information ([0036]-[0037] “In step 202, the neural network may receive a live feed of video images from a surgery in progress. For example, trained neural network 102 may be provided with live video feed from a surgery. In step 204, trained neural network 102 identifies anatomical objects, surgical objects, and tissue manipulations from the live video feed from the surgery and outputs classifications of the anatomical objects, surgical objects, and tissue manipulations.”, [0033]).

Buch fails to disclose, based on the augmented visual images, automatically adjusting an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure. However, Chow teaches a method and system for controlling (or facilitating control of) surgical tools during surgical procedures (Column 1, lines 8-10) wherein the processor (computer-vision processing system 110): receiving in real-time visual images from an imaging system of a surgical field during an ophthalmic surgical procedure (see Figure 1: real-time data collection system 145, Column 12, lines 1-6: “The image data can be received from a real-time data collection system 145, which can include (for example) one or more devices (e.g., cameras) located within an operating room and/or streaming live imaging data collected during performance of a procedure.”, Column 4, lines 23-33: phacoemulsification device); developing by the AI model selected image features based on a surgical instrument used on the region of interest and classifying a phase of the ophthalmic procedure being performed (Column 12, lines 7-24: “The machine-learning model can be configured to detect and/or characterize objects within the image data. The detection and/or characterization can include segmenting the image(s). In some instances, the machine-learning model includes or is associated with a preprocessing (e.g., intensity normalization, resizing, etc.) that is performed prior to segmenting the image(s). An output of the machine-learning model can include image-segmentation data that indicates which (if any) of a defined set of objects are detected within the image data, a location and/or position of the object(s) within the image data, and/or state of the object. State detector 150 can use the output from execution of the configured machine-learning model to identify a state within a surgical procedure that is then estimated to correspond with the processed image data. Procedural tracking data structure 155 can identify a set of potential states that can correspond to part of a performance of a specific type of procedure.”, Column 9, lines 5-17: “The trained machine-learning model can then be used in real-time to process one or more data streams (e.g., video streams, audio streams, image data, haptic feedback streams from a laparoscopic surgical tool, etc.). The processing can include (for example) recognizing and classifying one or more features from the one or more data streams, which can be used to interpret whether or not a surgical tool is within the field of view of the camera. Further, the feature(s) can then be used to identify a presence, position and/or use of one or more objects (e.g., surgical tool or anatomical structure), identify a stage or phase within a workflow (e.g., as represented via a surgical data structure), predict a future stage within a workflow, and other suitable features.”); constructing augmented visual images that include the real-time visual images, the image features developed by the AI model, and surgical phase information (Column 13, lines 29-35: “Output generator 160 can also include an augmentor 175 that generates or retrieves one or more graphics and/or text to be visually presented on (e.g., overlaid on) or near (e.g., presented underneath or adjacent to) real-time capture of a procedure. Augmentor 175 can further identify where the graphics and/or text are to be presented (e.g., within a specified size of a display).”, Column 13, lines 42-55, Column 2, lines 31-38: artificial intelligence model used to analyze video feed, Column 23, lines 1-9: “Various peripheral devices can further be provided, such as conventional displays 630, transparent displays that may be held between the surgeon and patient, ambient lighting 632, one or more operating room cameras 634, one or more operating room microphones 636, speakers 640 and procedural step notification screens placed outside the operating room to alert entrants of critical steps taking place. These peripheral components can function to provide, for example, state-related information.”, Column 12, lines 18-40); based on the augmented visual images, automatically adjusting an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure (Column 1, lines 47-58: "The camera is configured to capture live video within a field of view. The live video feed generated by the camera can be fed into the trained machine-learning model. The machine-learning model is trained, and thus, is configured to recognize patterns or classify objects within image frames of the live video feed. A procedural control system may communicate with the computer-vision processing system to control (or facilitate control of) the surgical tools based on the recognized patterns or classified objects that are outputted from the machine-learning model (e.g., the output being a result of processing the live video feed using the trained machine-learning model)", Column 2, lines 39-47: "In some implementations, the control (or facilitated control) of a surgical tool may be automatic", Column 4, lines 23-33: "For example, if the surgical tool is a phacoemulsification device, the computer-vision processing system may detect whether the device is too close to an iris (e.g., within a threshold distance) based on a comparison of the distance of the device to the anatomical structure and a threshold distance. If the device is detected as being too close to the iris (e.g., within the threshold distance), then the computer-vision processing system can generate an output that, when received at the procedural control system, causes the phacoemulsification device to cease operation.", Column 4, lines 34-61). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to, based on the augmented visual images, automatically adjust an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do so to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).

Regarding claim 15, Buch in view of Chow teaches the method of claim 14 (as shown above). Buch further discloses wherein the AI model provides feedback signals to an auditory device ([0017] “results returned to the OR…through a boom audio”, [0020] “This output, namely the identified key elements of the surgical field, can then be returned to the surgeon in audiovisual from on a monitor or, in the future, a heads-up augmented reality display in real-time”), the method further comprising: producing an audio warning by the auditory device when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure ([0032] “when the pointer is brought in close proximity to bone, surgical guidance generator 104 may generate output…as an auditory signal saying the word “bone”, during the surgery”).
Regarding claim 18, Buch in view of Chow teaches the method of claim 14 (as shown above). Buch further discloses wherein the AI model is a region-based convolutional neural network (R-CNN) ([0040], [0042] “supervised training of the RCNN”), the R-CNN operating to: find regions in the visual images that may contain an object and provide region proposals ([0042] “A region proposal network (RPN) receives image data from the backbone and outputs proposed regions of interest to inform object segmentation.”); extract convolutional neural network features from the region proposals ([0042] “The mask heads create proposed labels for anatomical objects in the image to be classified”); classify the object using the extracted features ([0042] “labels for anatomical objects in the image to be classified… classified anatomical structures and surgical instruments. The RCNN also outputs a loss indicative of a difference between the network-classified and manually classified image”); and construct augmented visual images of the surgical field using the objects classified from the extracted features ([0042] “The RCNN produces a surgical image with instance segmentations around classified anatomical structures and surgical instruments.”, Claim 9: “outputting the surgical guidance includes overlaying the surgical guidance on the live feed of video images or onto a surgical field using augmented reality”, Figures 6A-6B).

Regarding claim 19, Buch in view of Chow teaches the method of claim 14 (as shown above). Buch further discloses wherein the AI model is a segmentation network (SN) ([0036] “segmentation algorithms that identify anatomical objects, surgical objects, and tissue manipulations in the video images.”), the SN operating to: identify data sets of label images sampled from a training set of ophthalmic surgical procedures ([0019] “we will perform video processing on readily available microscopic and endoscopic recordings of cranial and spinal neurosurgical cases. Off-line image segmentation and labeling will be performed to break down components of representative video frames into matrices containing pixel-by-pixel categorizations of select anatomical and surgical objects.”); create an accurate profile of the surgical instruments and their usage for the ophthalmic surgical procedure ([0019] “pixel-by-pixel categorizations of select anatomical and surgical objects…A few key categories can be identified in the microscopic recording of the operation including bone, ligament, thecal sac, nerve root, and disc. Images from this procedure will be labeled with these defined structures and then will be fed into our DNN”); develop class labels within the ocular surgical field for the anatomical structures, tissue boundaries and tools (Figure 3B, [0036] “segmentation algorithms that identify anatomical objects, surgical objects, and tissue manipulations in the video images.”); identify through deep learning a training set of ophthalmic surgical procedures ([0020] “Once initialized, this network will be trained using hundreds of these de-identified surgical videos in which the same key structures have been segmented out. Over multiple iterations, the deep neural network will coalesce classifiers for each pixel designating the anatomical or surgical object class to which it belongs.”); and construct augmented visual images of the surgical field using the objects classified from the data sets of label images ([0020] “Once this classifier can be verified for accuracy, it can be implemented during novel video capture. This output, namely the identified key elements of the surgical field, can then be returned to the surgeon in audiovisual from on a monitor or, in the future, a heads-up augmented reality display in real-time. This will appear as segmented overlays over the specific anatomic structures and provide probability estimates for each item to demonstrate the AOA's level of certainty (or uncertainty).”).
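Claim 18 enumerates the standard region-based pipeline: propose regions, extract CNN features, classify, and overlay. A generic sketch using an off-the-shelf torchvision Mask R-CNN stands in for Buch's RCNN; the COCO-pretrained weights, random input, and 0.5 thresholds are placeholder choices, not anything from the record:

```python
# Illustrative only: an off-the-shelf Mask R-CNN performing the four steps
# claim 18 recites (region proposals, feature extraction, classification,
# overlay construction). Generic torchvision code, not Buch's network.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

frame = torch.rand(3, 480, 640)     # stand-in for one live video frame
with torch.no_grad():
    out = model([frame])[0]         # RPN proposals + CNN features + heads

augmented = frame.clone()
for mask, score in zip(out["masks"], out["scores"]):
    if score > 0.5:                 # keep only confident detections
        # Blend a semi-transparent overlay onto the classified object's pixels.
        augmented = torch.where(mask > 0.5, 0.5 * augmented + 0.5, augmented)
# "augmented" is the frame with transparent overlays on classified objects.
```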
Regarding claim 20, Buch discloses a non-transitory computer readable medium containing instructions that when executed by at least one processing device, cause the at least one processing device to (Claim 21: “A non-transitory computer readable medium having stored thereon executable instructions that when executed by a processor of a computer control the computer to perform steps comprising”): receive in real-time visual images from an imaging system of a surgical field during an ophthalmic surgical procedure (Claim 21: “receiving, by the neural network, a live feed of video images from a surgery”, [0018] “a video feed of the operative field can be obtained with either an overhead operating room (OR) camera, microscope, or endoscope”, [0022] “The AOA will also be able to caution surgeons if they are approaching structures designated as “do not manipulate,” such as spinal cord during intradural spine surgery or the posterior capsule of the lens in cataract surgery.”, Examiner notes that the artificial operative assistant disclosed in the prior art reference is used in cataract surgery and thus the other limitations presented in this prior art reference with regard to the AOA can be utilized in cataract surgery); extract regions of interest in the surgical field using information provided by an artificial intelligence (AI) model (Claim 21: “classifying, by the neural network, at least one of anatomical objects, surgical objects, and tissue manipulations in the live feed of video images”); select a region of interest computed by the AI model ([0042] “A flow warping algorithm calculates the movement trajectory of each segmented object from the preceding frames to guide the RPN in choosing regions of interest.”); develop by the AI model selected image features based on the surgical instruments used in the region of interest and classify a phase of the ophthalmic surgical procedure being performed ([0039] “The segmentation data in FIG. 3B may be used to train neural network 100 to identify the anatomical objects (the blood vessel and the spinal cord) and the surgical instrument”, [0023] “chronological and positional relationships between anatomical objects, surgical objects, and tissue manipulation over the course of a procedure will be entrained by the network. Quantifying these relationships will enable real-time object tracking”); construct augmented visual images that include the real-time visual images, the image features developed by the AI model, and surgical phase information ([0036]-[0037] “In step 202, the neural network may receive a live feed of video images from a surgery in progress. For example, trained neural network 102 may be provided with live video feed from a surgery. In step 204, trained neural network 102 identifies anatomical objects, surgical objects, and tissue manipulations from the live video feed from the surgery and outputs classifications of the anatomical objects, surgical objects, and tissue manipulations.”, [0033]).

Buch fails to disclose, based on the augmented visual images, automatically adjusting an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure. However, Chow teaches a method and system for controlling (or facilitating control of) surgical tools during surgical procedures (Column 1, lines 8-10) wherein the processor (computer-vision processing system 110): receive in real-time visual images from an imaging system of a surgical field during an ophthalmic surgical procedure (see Figure 1: real-time data collection system 145, Column 12, lines 1-6: “The image data can be received from a real-time data collection system 145, which can include (for example) one or more devices (e.g., cameras) located within an operating room and/or streaming live imaging data collected during performance of a procedure.”, Column 4, lines 23-33: phacoemulsification device, Column 16, lines 27-31: “The images may be captured using any image capturing device (e.g., a digital camera, a headset comprising a camera, a video camera, microscopes (e.g., for eye surgeries), and other suitable image capturing devices)”); develop by the AI model selected image features based on the surgical instruments used in the region of interest and classify a phase of the ophthalmic surgical procedure being performed (Column 12, lines 7-24: “The machine-learning model can be configured to detect and/or characterize objects within the image data. The detection and/or characterization can include segmenting the image(s). In some instances, the machine-learning model includes or is associated with a preprocessing (e.g., intensity normalization, resizing, etc.) that is performed prior to segmenting the image(s). An output of the machine-learning model can include image-segmentation data that indicates which (if any) of a defined set of objects are detected within the image data, a location and/or position of the object(s) within the image data, and/or state of the object. State detector 150 can use the output from execution of the configured machine-learning model to identify a state within a surgical procedure that is then estimated to correspond with the processed image data. Procedural tracking data structure 155 can identify a set of potential states that can correspond to part of a performance of a specific type of procedure.”, Column 9, lines 5-17: “The trained machine-learning model can then be used in real-time to process one or more data streams (e.g., video streams, audio streams, image data, haptic feedback streams from a laparoscopic surgical tool, etc.). The processing can include (for example) recognizing and classifying one or more features from the one or more data streams, which can be used to interpret whether or not a surgical tool is within the field of view of the camera. Further, the feature(s) can then be used to identify a presence, position and/or use of one or more objects (e.g., surgical tool or anatomical structure), identify a stage or phase within a workflow (e.g., as represented via a surgical data structure), predict a future stage within a workflow, and other suitable features.”); construct augmented visual images that include the real-time visual images, the image features developed by the AI model, and surgical phase information (Column 13, lines 29-35: “Output generator 160 can also include an augmentor 175 that generates or retrieves one or more graphics and/or text to be visually presented on (e.g., overlaid on) or near (e.g., presented underneath or adjacent to) real-time capture of a procedure. Augmentor 175 can further identify where the graphics and/or text are to be presented (e.g., within a specified size of a display).”, Column 13, lines 42-55, Column 2, lines 31-38: artificial intelligence model used to analyze video feed, Column 23, lines 1-9: “Various peripheral devices can further be provided, such as conventional displays 630, transparent displays that may be held between the surgeon and patient, ambient lighting 632, one or more operating room cameras 634, one or more operating room microphones 636, speakers 640 and procedural step notification screens placed outside the operating room to alert entrants of critical steps taking place. These peripheral components can function to provide, for example, state-related information.”, Column 12, lines 18-40); based on the augmented visual images, automatically adjust an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure (Column 1, lines 47-58: "The camera is configured to capture live video within a field of view. The live video feed generated by the camera can be fed into the trained machine-learning model. The machine-learning model is trained, and thus, is configured to recognize patterns or classify objects within image frames of the live video feed. A procedural control system may communicate with the computer-vision processing system to control (or facilitate control of) the surgical tools based on the recognized patterns or classified objects that are outputted from the machine-learning model (e.g., the output being a result of processing the live video feed using the trained machine-learning model)", Column 2, lines 39-47: "In some implementations, the control (or facilitated control) of a surgical tool may be automatic", Column 4, lines 23-33: "For example, if the surgical tool is a phacoemulsification device, the computer-vision processing system may detect whether the device is too close to an iris (e.g., within a threshold distance) based on a comparison of the distance of the device to the anatomical structure and a threshold distance. If the device is detected as being too close to the iris (e.g., within the threshold distance), then the computer-vision processing system can generate an output that, when received at the procedural control system, causes the phacoemulsification device to cease operation.", Column 4, lines 34-61). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to, based on the augmented visual images, automatically adjust an operating parameter of a surgical instrument in use during the ophthalmic surgical procedure, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do so to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).

Regarding claim 21, Buch in view of Chow teaches the image-guided tool of claim 1 (as shown above). Buch further discloses the instructions further causing the processor to: construct the augmented visual images further based on the image features, the surgical phase of the surgical procedure being performed, a position of a surgical instrument used in the surgical procedure, or a combination thereof ([0036]-[0037] “In step 202, the neural network may receive a live feed of video images from a surgery in progress. For example, trained neural network 102 may be provided with live video feed from a surgery. In step 204, trained neural network 102 identifies anatomical objects, surgical objects, and tissue manipulations from the live video feed from the surgery and outputs classifications of the anatomical objects, surgical objects, and tissue manipulations.”, [0033]); and display the augmented visual images on the display device ([0005] “outputting, in real time, by audio and video-overlay means, tailored surgical guidance based on the classified at least one of anatomical objects, surgical objects, and tissue manipulations in the live feed of video images”, [0020] “on a monitor or, in the future, a heads-up augmented reality display”, [0035] “may output the surgical guidance on a video screen that surgeons use during image guided surgery”). Alternatively, Chow discloses construct the augmented visual images further based on the image features, the surgical phase of the surgical procedure being performed, a position of a surgical instrument used in the surgical procedure, or a combination thereof; and display the augmented visual images on the display device (Column 13, lines 42-55: “Augmentor 175 can send the graphics and/or text and/or any positioning information to an augmented reality device (not shown), which can integrate the (e.g., digital) graphics and/or text with a user's environment in real time. The augmented reality device can (for example) include a pair of goggles that can be worn by a person participating in part of the procedure. It will be appreciated that, in some instances, the augmented display can be presented at a non-wearable user device, such as at a computer or tablet. The augmented reality device can present the graphics and/or text at a position as identified by augmentor 175 and/or at a predefined position. Thus, a user can maintain real-time view of procedural operations and further view pertinent state-related information.”, Column 23, lines 1-7: “Various peripheral devices can further be provided, such as conventional displays 630, transparent displays that may be held between the surgeon and patient, ambient lighting 632, one or more operating room cameras 634, one or more operating room microphones 636, speakers 640 and procedural step notification screens placed outside the operating room to alert entrants of critical steps taking place”, Column 12, lines 18-40). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow for constructing the augmented visual images further based on the phase of the ophthalmic surgical procedure computed by the AI model, and displaying the augmented visual images on the display device, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do so to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).

Regarding claim 22, Buch in view of Chow teaches the image-guided tool of claim 1 (as shown above). Buch fails to disclose wherein the operating parameter of the surgical instrument comprises one or more selected from a group consisting of: a power of an ultrasonic phacoemulsification probe; a power driving an ultrasonic instrument; a vacuum level; and a placement of the surgical instrument. However, Chow teaches wherein the operating parameter of the surgical instrument comprises one or more selected from a group consisting of: a power of an ultrasonic phacoemulsification probe (Column 4, lines 24-33: “if the surgical tool is a phacoemulsification device, the computer-vision processing system may detect whether the device is too close to an iris (e.g., within a threshold distance) based on a comparison of the distance of the device to the anatomical structure and a threshold distance. If the device is detected as being too close to the iris (e.g., within the threshold distance), then the computer-vision processing system can generate an output that, when received at the procedural control system, causes the phacoemulsification device to cease operation.”, Examiner notes that although power is not specifically mentioned, it would be obvious to one skilled in the art that a “cease” of operation would result in “no power” being applied., Column 4, lines 34-38: “the computer-vision processing system may be configured to recognize an action occurring within the field of view of the camera. Upon detecting the action, the computer-vision processing system can cause auxiliary surgical tools to be enabled or disabled.”, Column 4, lines 45-49: “The computer-vision processing system can then regulate the magnitude of the energy provided to the surgical tool depending on the proximity of the surgical tool to the critical structure and the surgical tool.”); a vacuum level (Column 18, lines 24-41: “the functionality of the surgical tool may be controlled in response to an output signal from the procedural control system of the computer-vision processing system. Non-limiting examples of controlling the functionality of the surgical tool may include… adjusting the magnitude of the function (e.g., increasing or decreasing a vacuum pressure, but not enabling or disabling the vacuum function altogether)…”); and a placement of the surgical instrument (Column 18, lines 24-41: “the functionality of the surgical tool may be controlled in response to an output signal from the procedural control system of the computer-vision processing system. Non-limiting examples of controlling the functionality of the surgical tool may include… adjusting a position or setting of a surgical tool…”, Column 18, lines 48-53: “It will also be appreciated that the position of or a physical setting of a surgical tool may be modified or adjusted based on an output of the machine-learning model. For example, the angle of the cutting arms of a laparoscopic scissor may be adjusted towards or away from a detected anatomical structure.”). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Buch to incorporate the teachings of Chow to have the operating parameter of the surgical instrument comprise one or more selected from a group consisting of: a power of an ultrasonic phacoemulsification probe; a power driving an ultrasonic instrument; a vacuum level; and a placement of the surgical instrument, as these prior art references are directed to controlling/guiding surgical procedures based on a live video feed using AI (Column 2, lines 24-38). One would be motivated to do so to improve the safety and reliability of surgeries, as recognized by Chow (Column 1, lines 10-14).
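Chow's broader teaching for claim 22 (Column 4, lines 45-49) is graded regulation of energy magnitude by proximity, not only the binary cease-operation example. A hypothetical sketch, with invented distance thresholds:

```python
# Hypothetical graded control in the spirit of Chow's "regulate the magnitude
# of the energy ... depending on the proximity": full power at a safe distance,
# a linear taper inside it, and zero at a hard stop. All numbers are invented.
SAFE_MM, STOP_MM = 3.0, 0.5

def scaled_power(requested: float, distance_mm: float) -> float:
    if distance_mm >= SAFE_MM:
        return requested
    if distance_mm <= STOP_MM:
        return 0.0  # cease operation at the hard stop
    frac = (distance_mm - STOP_MM) / (SAFE_MM - STOP_MM)
    return requested * frac  # linear taper inside the caution band

for d in (4.0, 2.0, 0.4):
    print(d, round(scaled_power(0.8, d), 3))  # 0.8, 0.48, 0.0
```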
Claim(s) 7 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claims 4 and 14 above, and further in view of Panescu et al. (US 2017/0181808 A1, previously cited), hereinafter Panescu.

Regarding claims 7 and 16, Buch in view of Chow teaches the image-guided tool of claim 4 and the method of claim 14 (as shown above). Buch and Chow, alone or in combination, are silent regarding wherein the AI model provides haptic feedback signals to a haptic device, the haptic device vibrating the surgical instrument when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure, or the method further comprising: vibrating the surgical instrument when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure. However, Panescu discloses a surgical system with haptic feedback which results in vibrating of the surgical instrument when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure ([0153] “In alternative embodiments, the operating surgeon may be provided with haptic feedback that vibrates or provides manipulating resistance to the control input device 160 of FIG. 5. The amount of vibration or manipulating resistance would be modulated according to the magnitude of tissue displacement u, or tissue force f”). It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the tool and method of Buch and Chow to incorporate the teachings of Panescu to have the haptic device vibrate the surgical instrument when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure, as these prior art references and the instant application are directed to surgical systems. One would be motivated to do so to alert the surgeon that a proximity threshold has been crossed, as recognized by Panescu ([0122]).
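Panescu [0153] modulates haptic intensity by the magnitude of tissue displacement u or tissue force f. A one-function hypothetical sketch of that mapping (the saturation force is invented):

```python
# Hypothetical mapping from measured tissue force to a haptic vibration command,
# following Panescu's idea that feedback intensity scales with force magnitude f.
F_MAX_N = 2.0  # invented saturation force, in newtons

def vibration_amplitude(force_n: float) -> float:
    """Return a 0..1 vibration command, clamped at the saturation force."""
    return max(0.0, min(1.0, force_n / F_MAX_N))

print(vibration_amplitude(0.5))  # 0.25
print(vibration_amplitude(3.0))  # 1.0 (saturated)
```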
Claim(s) 7 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claims 4 and 14 above, and further in view of Panescu et al. (US 2017/0181808 A1, previously cited), hereinafter Panescu.

Regarding claims 7 and 16, Buch in view of Chow teaches the image-guided tool of claim 4 and the method of claim 14 (as shown above). Buch and Chow, alone or in combination, are silent regarding wherein the AI model provides haptic feedback signals to a haptic device, the haptic device vibrating the surgical instrument when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure, or the method further comprising: vibrating the surgical instrument when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure.

However, Panescu discloses a surgical system with haptic feedback which results in the vibrating of the surgical instrument when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure ([0153] “In alternative embodiments, the operating surgeon may be provided with haptic feedback that vibrates or provides manipulating resistance to the control input device 160 of FIG. 5. The amount of vibration or manipulating resistance would be modulated according to the magnitude of tissue displacement u, or tissue force f”).

It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the tool and method of Buch and Chow to incorporate the teachings of Panescu to have the haptic device vibrate the surgical instrument when the surgical instrument approaches or deviates into a particular location or plane during the surgical procedure, as these prior art references and the instant application are directed to surgical systems. One would be motivated to do this to alarm the surgeon that a proximity threshold has been crossed, as recognized by Panescu ([0122]).

Claim(s) 8 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Buch in view of Chow as applied to claims 4 and 14 above, and further in view of Beelen et al. (US Patent 10,350,014 B2, previously cited), hereinafter Beelen.

Regarding claims 8 and 17, Buch in view of Chow teaches the image-guided tool of claim 4 and the method of claim 14 (as shown above). Buch and Chow, alone or in combination, fail to explicitly teach wherein the surgical instrument is robotically manipulated and the AI model provides feedback signals to the robotic surgical instrument, wherein the robotic surgical instrument is automatically retracted from the surgical field when the robotic surgical instrument approaches or deviates into a particular location or plane during the surgical procedure, or the method further comprising: automatically retracting the robotic surgical instrument from the surgical field when the robotic surgical instrument approaches or deviates into a particular location or plane during the surgical procedure.

However, Beelen discloses a surgical robotic system for use in a surgical procedure with a movable arm part for mounting a surgical instrument (Abstract) wherein “the surgical instrument being retracted in longitudinal direction when non-longitudinal movement 107 of the instrument causes the instrument to arrive at the virtual bound… when the human operator provides a positioning command that would result in an instrument movement past the virtual bound 132, the positioning commands may be processed such that the instrument does not pass the virtual bound” (Column 14, lines 50-61, Figure 20).

It would have been prima facie obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the tool and method of Buch and Chow to incorporate the teachings of Beelen to have the surgical instrument be robotically manipulated with the AI model providing feedback signals to the robotic surgical instrument, wherein the robotic surgical instrument is automatically retracted from the surgical field when the robotic surgical instrument approaches or deviates into a particular location or plane during the surgical procedure, as these prior art references and the instant application are directed to surgical instruments. One would be motivated to do this to prevent accidental damage to delicate tissue, as recognized by Beelen (Column 14, line 62).
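To make the cited safety behaviors concrete, here is a minimal Python sketch of the two mechanisms as the rejections characterize them: a Panescu-style haptic amplitude that ramps up as the tool tip nears a defined plane, and a Beelen-style clamp that keeps a commanded robotic move from passing a virtual bound. All names, geometry, and thresholds (e.g., guidance_step, HAPTIC_WARN_MM) are illustrative assumptions, not the references' implementations.

```python
# Hypothetical sketch of the Panescu and Beelen safety behaviors:
# vibration feedback modulated near a critical plane, and positioning
# commands clamped so a robotic instrument cannot pass a virtual bound.
# All names, geometry, and thresholds are illustrative assumptions.

import numpy as np

HAPTIC_WARN_MM = 2.0  # assumed distance at which vibration feedback begins

def signed_distance_mm(p, plane_point, plane_normal):
    """Signed distance from point p to the virtual bounding plane
    (positive on the safe side of the plane)."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    return float(np.dot(np.asarray(p, dtype=float) - np.asarray(plane_point, dtype=float), n))

def guidance_step(tip, commanded, plane_point, plane_normal):
    """One control tick: return (haptic_amplitude, safe_commanded_position)."""
    d = signed_distance_mm(tip, plane_point, plane_normal)
    # Panescu-style cue: amplitude ramps from 0 to 1 as the tip closes on
    # the plane (the reference modulates by tissue displacement/force).
    haptic = float(np.clip((HAPTIC_WARN_MM - d) / HAPTIC_WARN_MM, 0.0, 1.0))
    # Beelen-style bound: if the commanded move would cross the plane,
    # project the command back onto the bound so it is never passed.
    commanded = np.asarray(commanded, dtype=float)
    d_cmd = signed_distance_mm(commanded, plane_point, plane_normal)
    if d_cmd < 0.0:
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        commanded = commanded - d_cmd * n  # retract to the virtual bound
    return haptic, commanded
```

Fully retracting the instrument from the surgical field, as claims 8 and 17 recite, would extend the same boundary check into a retract-along-axis motion rather than the simple projection shown here.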
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Papac (US Patent 10,426,339 B2) discloses an ophthalmic surgical system wherein the display device is a surgical microscope (Abstract, Figure 4).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ATTIYA SAYYADA HUSSAINI whose telephone number is (703) 756-5921. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Niketa Patel, can be reached at 571-272-4156. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ATTIYA SAYYADA HUSSAINI/
Examiner, Art Unit 3792

/NIKETA PATEL/
Supervisory Patent Examiner, Art Unit 3792

Prosecution Timeline

May 02, 2022
Application Filed
Feb 14, 2025
Non-Final Rejection — §103, §112
Aug 26, 2025
Response Filed
Sep 11, 2025
Final Rejection — §103, §112
Jan 07, 2026
Examiner Interview Summary
Jan 07, 2026
Applicant Interview (Telephonic)
Feb 17, 2026
Request for Continued Examination
Mar 09, 2026
Non-Final Rejection — §103, §112
Mar 09, 2026
Response after Non-Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12582315
ELECTROCARDIOGRAM ANALYSIS APPARATUS, ELECTROCARDIOGRAM ANALYZING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12558558
PORTABLE MEDICAL TREATMENT APPARATUS WITH INTERACTIVE GUIDANCE AND CARDIOPULMONARY RESUSCITATIVE FUNCTIONALITY
2y 5m to grant Granted Feb 24, 2026
Patent 12551703
Adaptive Deep Brain Stimulation Based on Neural Signals with Dynamics
2y 5m to grant Granted Feb 17, 2026
Patent 12478799
Non-Invasive Multi-Wavelength Laser Cancer Treatment
2y 5m to grant Granted Nov 25, 2025
Patent 12415090
LASER IRRADIATION DEVICE
2y 5m to grant Granted Sep 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
52%
Grant Probability
64%
With Interview (+12.4%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
