Prosecution Insights
Last updated: April 19, 2026
Application No. 18/622,860

SYSTEM OF REAL-TIME DISPLAYING PROMPT IN SYNCHRONOUS DISPLAYED SURGICAL OPERATION VIDEO AND METHOD THEREOF

Non-Final OA §103
Filed: Mar 29, 2024
Examiner: BOICE, JAMES EDWARD
Art Unit: 3795
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Smart Surgery
OA Round: 1 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
Grant Probability With Interview: 89%

Examiner Intelligence

Career Allow Rate: 79% (94 granted / 119 resolved), above the TC average by 9.0%
Interview Lift: +10.0% for resolved cases with an interview vs. without (moderate lift)
Typical Timeline: 2y 9m average prosecution; 56 applications currently pending
Career History: 175 total applications across all art units
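
As a rough illustration of how these figures fit together, here is a minimal sketch that recomputes the career allow rate from the granted/resolved counts and the interview lift from the with-interview rate shown in the projections below. The helper names are assumptions for illustration, not part of the tool.

```python
# Minimal sketch (assumed helper names): career allow rate and interview lift.

def allow_rate(granted: int, resolved: int) -> float:
    """Share of resolved cases that ended in a grant."""
    return granted / resolved

def interview_lift(rate_with_interview: float, rate_without: float) -> float:
    """Difference between allow rates with and without an examiner interview."""
    return rate_with_interview - rate_without

base = allow_rate(94, 119)            # ~0.79, the 79% career allow rate above
lift = interview_lift(0.89, base)     # ~+0.10, the "+10.0% Interview Lift"
print(f"allow rate {base:.0%}, interview lift {lift:+.1%}")
```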

Statute-Specific Performance

§101: 0.6% (-39.4% vs TC avg)
§103: 57.7% (+17.7% vs TC avg)
§102: 20.7% (-19.3% vs TC avg)
§112: 17.6% (-22.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 119 resolved cases.
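
The per-statute deltas reduce to simple subtraction. In the sketch below, the Tech Center average is back-solved from the displayed deltas (roughly 40% per statute) and is an assumption, not a value reported on this page.

```python
# Sketch of the "vs TC avg" deltas: examiner statute-specific rate minus an
# assumed Tech Center average (back-solved from the deltas shown above).
examiner_rates = {"§101": 0.006, "§103": 0.577, "§102": 0.207, "§112": 0.176}
tc_average_estimate = 0.40  # assumption, not reported data

for statute, rate in examiner_rates.items():
    delta = rate - tc_average_estimate
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```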

Office Action

§103
Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Priority Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119 (a)-(d). The certified copy of priority document TW112112567 has been received. Claim Interpretation The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. 
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are found in Claim 7, and include: an image processing module; a message obtaining module; a target determining module; a position determining module; and a label generating module. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. More specifically: an image processing module is interpreted as image processing module 160 in FIG. 1B of the present patent application; a message obtaining module is interpreted as message obtaining module 120 in FIG. 1B of the present patent application; a target determining module is interpreted as target determining module 130 in FIG. 1B of the present patent application; a position determining module is interpreted as position determining module 140 in FIG. 1B of the present patent application; and a label generating module is interpreted as label generating module 150 in FIG. 1B of the present patent application. For purposes of examination, these modules are interpreted as executable software and/or executable firmware and/or hardware subcomponents of the processing module 101 shown in FIG. 1B of the present patent application. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The present rejection(s) reference specific passages from cited prior art. However, Applicant is advised that the rejections are based on the entirety of each cited prior art. That is, each cited prior art reference “must be considered in its entirety”. Therefore, Applicant is advised to review all portions of the cited prior art if traversing a rejection based on the cited prior art. Claims 1-5 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Shioda et al. (US PGPUB 2017/0289528 – “Shioda”) in view of Amanatullah (US PGPUB 2019/0231433 – “Amanatullah”). Regarding Claim 1, Shioda discloses: A method of real-time displaying a synchronous displayed surgical operation video (Shioda FIG. 2, combination processing section 25; Shioda paragraph [0072], “process by the combination processing section 25, a process in the case of enabling the user to observe a 3D on the basis of glasses-free 3D picture technology”) applicable to a device (Shioda FIG. 2, medical stereoscopic observation device 9), and the method comprising: during a surgery, synchronously capturing two two-dimensional (2D) surgical operation videos from different viewing angles (Shioda FIG. 2, first imaging section 91a and second imaging section 91b; Shioda paragraph [0064], “first imaging section 91a and the second imaging section 91b capture an imaging target from mutually different viewpoints”), by the device (Shioda FIG. 2, medical stereoscopic observation device 9); generating a naked eye three-dimensional (3D) video corresponding to a 3D display based on the two 2D surgical operation videos in real time (Shioda FIG. 2, combination processing section 25; Shioda paragraph [0072], “combination processing section 25, a process in the case of enabling the user to observe a 3D on the basis of glasses-free 3D picture technology”), by the device; using the 3D display to synchronously project the two 2D surgical operation videos to left and right eyes of a viewer based on the naked eye 3D video, respectively, to make the viewer watch a 3D surgical operation video (Shioda FIG. 1, display device 550; Shioda paragraph [0072], “combination processing section 25 generates a desired multiview image presented on the basis of the computed results of the parallax value by the viewpoint images observed by the user's left and right eyes, respectively, and causes the generated multiview image to be displayed on a certain display device (for example, the display device 550 illustrated in FIG. 
1)), by the device; Shioda does not explicitly disclose: A method of real-time displaying a prompt, the method comprising: obtaining an instruction message, by the device; determining a target part related to the instruction message, by the device; determining a label position of the target part in each of the two 2D surgical operation videos based on feature data of the target part, by the device; and generating a prompt corresponding to the target part in the two 2D surgical operation videos based on the label positions, to make the prompt be displayed in the 3D surgical operation video watched by the viewer, by the device. Amanatullah teaches: A method of real-time displaying a prompt, the method comprising: obtaining an instruction message (Amanatullah FIG. 1A, 3D virtual patient model with structural labels; Amanatullah paragraph [0035], “write an anatomical tissue label to each distinct tissue mass in the 3D point cloud based on anatomical tissue labels manually entered or selected by the surgeon through the physician portal”), by the device; determining a target part related to the instruction message, by the device (Amanatullah paragraph [0035], “the computer system can implement template matching techniques to match template tissue point clouds—labeled with one or more anatomical tissue labels—to tissue masses identified in the 3D point cloud and transfer anatomical tissue labels from matched template tissue point clouds to corresponding tissue masses in the 3D point cloud”); determining a label position of the target part in each of the two 2D surgical operation videos based on feature data of the target part (Amanatullah paragraph [0050], “the computer system can: transform 2D optical scans captured by cameras within the operating room into a 3D surgical field image”; Examiner interprets Amanatullah’s teaching that the 3D images are generated from 2D optical scans as teaching that the labels are on each of the two 2D images/videos), by the device; and generating a prompt corresponding to the target part in the two 2D surgical operation videos based on the label positions, to make the prompt be displayed in the 3D surgical operation video watched by the viewer (Amanatullah FIG. 1A, 3D virtual patient model with structural labels), by the device. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Amanatullah’s image labeling with the method disclosed by Shioda. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a method for identifying not only where, but also what, objects in a video are (see Amanatullah paragraphs [0035] and [0051]), in order for the user to fully understand what they are looking at. Regarding Claim 2, Shioda in view of Amanatullah teaches the features of Claim 1, as described above. Amanatullah further teaches displaying a color block label or a text description for indicating the target part (Amanatullah FIG. 1A, “UNRESECTED TISSUE OF INTEREST”) corresponding to the label positions in the two 2D surgical operation videos, by the device, wherein the target part comprises an organ, a tissue (Amanatullah FIG. 1A, “UNRESECTED TISSUE OF INTEREST”), or an instrument. Regarding Claim 3, Shioda in view of Amanatullah teaches the features of Claim 1, as described above. 
Amanatullah further teaches receiving an instructional voice or detecting an instructional gesture within a surgical range to obtain the instruction message, or generating the instruction message based on an instructional operation for the naked eye 3D video displayed on a screen (Amanatullah paragraph [0035], “the computer system can implement template matching techniques to match template tissue point clouds—labeled with one or more anatomical tissue labels—to tissue masses identified in the 3D point cloud and transfer anatomical tissue labels from matched template tissue point clouds to corresponding tissue masses in the 3D point cloud”), by the device. Regarding Claim 4, Shioda in view of Amanatullah teaches the features of Claim 3, as described above. Amanatullah further teaches: analyzing a content of the instructional voice, or determining a position of the instructional gesture or the instructional operation in the naked eye 3D video, to determine the target part (Amanatullah paragraph [0035], “the computer system can implement template matching techniques to match template tissue point clouds—labeled with one or more anatomical tissue labels—to tissue masses identified in the 3D point cloud and transfer anatomical tissue labels from matched template tissue point clouds to corresponding tissue masses in the 3D point cloud”), by the device. Regarding Claim 5, Shioda in view of Amanatullah teaches the features of Claim 1, as described above. Shioda further discloses using an image capturing device (Shioda FIG. 2, imaging unit 90) with dual camera lenses (Shioda first imaging section 91a and second imaging section 91b; Shioda paragraph [0063], “the optical system includes various types of lenses”) to capture the two 2D surgical operation videos (Shioda FIG. 1, surgical video microscope device 510; Shioda paragraph [0046], “FIG. 1 illustrates an example of a case for an applied example of using a medical stereoscopic observation device according to an embodiment of the present disclosure, in which a surgical video microscope device equipped with an arm is used as the medical stereoscopic observation device.”), by the device. Regarding Claim 7, Shioda discloses: A system of real-time displaying a surgical operation video (Shioda FIG. 2, medical stereoscopic observation device 9), applicable to a device or multiple devices connected to each other, and the system comprising, an image capturing module (Shioda FIG. 2, imaging unit 90), configured to synchronously capture two 2D surgical operation videos from different viewing angles during a surgery (Shioda FIG. 2, first imaging section 91a and second imaging section 91b; Shioda paragraph [0064], “first imaging section 91a and the second imaging section 91b capture an imaging target from mutually different viewpoints”); an image display module (Shioda FIG. 1, display device 550), comprising a 3D display (Shioda paragraph [0056], “a mechanism for enabling the user 520 to observe, as a 3D image, the images displayed on the display device 550 as the left-eye image and the right-eye image”); and a processing module (Shioda FIG. 15, information processing apparatus 900; Shioda paragraph [0194], “an information processing apparatus 900 constituting a medical stereoscopic observation device according to the present embodiment, such as the surgical video microscope device or the image processing device discussed earlier”), connected to the image capturing module (Shioda FIG. 2, imaging unit 90) and the image display module (Shioda FIG. 
2, parallax image; Shioda FIG. 1, display device 550; Shioda paragraph [0071], “the combination processing section 25 generates a parallax image having a set parallax enabling the user to observe a 3D image”) and configured to execute the computer readable instructions to generate: an image processing module (Shioda FIG. 2, combination processing section 25), configured to generate a naked eye 3D video corresponding to the 3D display based on the two 2D surgical operation videos in real time, to make the 3D display synchronously project the two 2D surgical operation videos to the left and right eyes of a viewer based on the naked eye 3D video, respectively, to make the viewer watch a 3D surgical operation video (Shioda paragraph [0072], “combination processing section 25, a process in the case of enabling the user to observe a 3D on the basis of glasses-free 3D picture technology”). Shioda does not explicitly disclose: A system for displaying a prompt; a message obtaining module, configured to obtain an instruction message; a target determining module, configured to determine a target part related to the instruction message; a position determining module, configured to determine a label position of the target part in each of the two 2D surgical operation videos based on feature data of the target part; and a label generating module, configured to generate a prompt corresponding to the target part in the two 2D surgical operation videos based on the label positions, to make the prompt be displayed in the 3D surgical operation video watched by the viewer. Amanatullah teaches: A system for displaying a prompt (Amanatullah FIG. 1A, labels on 3D virtual patient model generated by method S100; Amanatullah paragraph [0014], “a computer system can execute Blocks of the method S100 to access and transform scan data of a hard tissue of interest (e.g., bone) of a patient into a virtual patient model representing the hard tissue of interest prior to a surgical operation on the patient”); a message obtaining module (Amanatullah paragraph [0171], computer-executable component can be a processor but any dedicated hardware device can (alternatively or additionally) execute the instructions”) configured to obtain an instruction message ((Amanatullah FIG. 
1A, 3D virtual patient model with structural labels; Amanatullah paragraph [0035], “write an anatomical tissue label to each distinct tissue mass in the 3D point cloud based on anatomical tissue labels manually entered or selected by the surgeon through the physician portal”); a target determining module (Amanatullah paragraph [0171], computer-executable component can be a processor but any dedicated hardware device can (alternatively or additionally) execute the instructions”), configured to determine a target part related to the instruction message (Amanatullah paragraph [0035], “the computer system can implement template matching techniques to match template tissue point clouds—labeled with one or more anatomical tissue labels—to tissue masses identified in the 3D point cloud and transfer anatomical tissue labels from matched template tissue point clouds to corresponding tissue masses in the 3D point cloud”); a position determining module (Amanatullah paragraph [0171], computer-executable component can be a processor but any dedicated hardware device can (alternatively or additionally) execute the instructions”), configured to determine a label position of the target part in each of the two 2D surgical operation videos based on feature data of the target part (Amanatullah paragraph [0050], “the computer system can: transform 2D optical scans captured by cameras within the operating room into a 3D surgical field image”; Examiner interprets Amanatullah’s teaching that the 3D images are generated from 2D optical scans as teaching that the labels are on each of the two 2D images/videos); and a label generating module (Amanatullah paragraph [0171], computer-executable component can be a processor but any dedicated hardware device can (alternatively or additionally) execute the instructions”), configured to generate a prompt corresponding to the target part in the two 2D surgical operation videos based on the label positions, to make the prompt be displayed in the 3D surgical operation video watched by the viewer (Amanatullah FIG. 1A, 3D virtual patient model with structural labels). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Amanatullah’s computer modules, which are created by Amanatullah’s computer/processor, with the device disclosed by Shioda. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a device that identifies not only where, but also what, objects in a video are (see Amanatullah paragraphs [0035] and [0051]), in order for the user to fully understand what they are looking at. Regarding Claim 8, Shioda in view of Amanatullah teaches the features of Claim 7, as described above. Amanatullah further teaches wherein the label generating module displays a color block label or a text description for indicating the target part (Amanatullah FIG. 1A, “UNRESECTED TISSUE OF INTEREST”) corresponding to the label positions in the two 2D surgical operation videos, to generate the prompt, wherein the target part comprises an organ, a tissue (Amanatullah FIG. 1A, “UNRESECTED TISSUE OF INTEREST”), or an instrument. Regarding Claim 9, Shioda in view of Amanatullah teaches the features of Claim 7, as described above. 
Amanatullah further teaches wherein the message obtaining module receives an instructional voice or detects an instructional gesture within a surgical range to obtain the instruction message, or generate the instruction message based on an instructional operation for the naked eye 3D video displayed on a screen (Amanatullah paragraph [0035], “the computer system can implement template matching techniques to match template tissue point clouds—labeled with one or more anatomical tissue labels—to tissue masses identified in the 3D point cloud and transfer anatomical tissue labels from matched template tissue point clouds to corresponding tissue masses in the 3D point cloud”), and the target determining module analyzes a content of the instructional voice, or determines a position corresponding to the instructional gesture or the instructional operation in the naked eye 3D video, to determine the target part (Amanatullah paragraph [0035], “the computer system can implement template matching techniques to match template tissue point clouds—labeled with one or more anatomical tissue labels—to tissue masses identified in the 3D point cloud and transfer anatomical tissue labels from matched template tissue point clouds to corresponding tissue masses in the 3D point cloud”). Regarding Claim 10, Shioda in view of Amanatullah teaches the features of Claim 7, as described above. Shioda further discloses wherein the image capturing module comprises an image capturing device (Shioda FIG. 2, imaging unit 90) with dual camera lenses (Shioda first imaging section 91a and second imaging section 91b; Shioda paragraph [0063], “the optical system includes various types of lenses”) to capture the two 2D surgical operation videos (Shioda FIG. 1, surgical video microscope device 510; Shioda paragraph [0046], “FIG. 1 illustrates an example of a case for an applied example of using a medical stereoscopic observation device according to an embodiment of the present disclosure, in which a surgical video microscope device equipped with an arm is used as the medical stereoscopic observation device.”) Claims 6 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Shioda et al. (US PGPUB 2017/0289528 – “Shioda”) in view of Amanatullah (US PGPUB 2019/0231433 – “Amanatullah”) and Russell (US PGPUB 2020/0021796 – “Russell”). Regarding Claim 6, Shioda in view of Amanatullah teaches the features of Claim 1, as described above. Shioda in view of Amanatullah does not explicitly teach: detecting dual-eye dynamics and a head movement of the viewer to determine a watch sight of the viewer, and adjusting viewing angles of the 3D display projecting the two 2D surgical operation videos based on the watch sight, by the device. Russell teaches: detecting dual-eye dynamics and a head movement of the viewer to determine a watch sight of the viewer, and adjusting viewing angles of the 3D display projecting the two 2D surgical operation videos based on the watch sight, by the device (Russell FIG. 2, tracking system 230; Russell paragraph [0072], “tracking system 230 may track a location of the user (or location of the eyes or head of the user) with respect to a display device 202, for example, may be used to configure masks for a left and a right eye of the user to display images with proper depth and parallax.”). 
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Russell’s user tracking system with the method taught by Shioda in view of Amanatullah. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a method that allows a user to view a 3D display without glasses and with little distortion of the 3D image (see Russell paragraph [0039]). Regarding Claim 11, Shioda in view of Amanatullah teaches the features of Claim 7, as described above. Shioda in view of Amanatullah does not explicitly teach: wherein the processing module comprises a sight detecting module and a viewing angle adjusting module, the sight detecting module detects dual-eye dynamics and a head movement of the viewer to determine a watch sight of the viewer, and the viewing angle adjusting module adjusts viewing angles of the 3D display projecting the two 2D surgical operation videos based on the watch sight. Russell teaches: wherein the processing module comprises a sight detecting module (Amanatullah paragraph [0171], computer-executable component can be a processor but any dedicated hardware device can (alternatively or additionally) execute the instructions”) and a viewing angle adjusting module (Amanatullah paragraph [0171], computer-executable component can be a processor but any dedicated hardware device can (alternatively or additionally) execute the instructions”), the sight detecting module detects dual-eye dynamics and a head movement of the viewer to determine a watch sight of the viewer, and the viewing angle adjusting module adjusts viewing angles of the 3D display projecting the two 2D surgical operation videos based on the watch sight (Russell FIG. 2, tracking system 230; Russell paragraph [0072], “tracking system 230 may track a location of the user (or location of the eyes or head of the user) with respect to a display device 202, for example, may be used to configure masks for a left and a right eye of the user to display images with proper depth and parallax.”). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine Russell’s user tracking system with the device taught by Shioda in view of Amanatullah. A person having ordinary skill in the art would be motivated to combine these prior art elements according to known methods to yield the predictable result of a device that allows a user to view a 3D display without glasses and with little distortion of the 3D image (see Russell paragraph [0039]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure comprises: Chen et al. (US PGPUB 2020/0100661 – “Chen”), which teaches in Chen FIG. 4 a pair of smart glasses that display a first display on a first lens and a second display on a second lens, in order to generate a 3D image for a user to view; Morris et al. (US PGPUB 2018/0018144 – “Morris”), which teaches in Morris FIG. 3 a 2D image on which a computer provides on-screen labels describing objects on a display; Ishikawa et al. (US PGPUB 2018/0125340 – “Ishikawa”), which teaches in Ishikawa FIG. 3 a system that displays images from two separate imaging sections for display on two display sections, in order to generate a 3D image; and Kuzara et al. (US PGPUB 2006/0174065 – “Kuzara”), which teaches in Kuzara FIG. 
1 a display that provides user-selected and/or computer-selected labels for features shown in the display. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIM BOICE whose telephone number is (571)272-6565. The examiner can normally be reached Monday-Friday 9:00am - 5:00pm Eastern. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anhtuan Nguyen can be reached at (571)272-4963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. JIM BOICE Examiner Art Unit 3795 /JAMES EDWARD BOICE/Examiner, Art Unit 3795 /ANH TUAN T NGUYEN/Supervisory Patent Examiner, Art Unit 3795 03/28/2026
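
For orientation only, the sketch below mirrors the module structure recited in Claim 7 and interpreted under 35 U.S.C. 112(f) in the Office Action above. Every class, method, and placeholder value is invented for illustration; this is not the applicant's disclosed implementation or code from the cited references.

```python
# Illustrative only: a hypothetical decomposition mirroring the Claim 7 modules
# interpreted under 35 U.S.C. 112(f) above. All names and logic are assumptions.
from dataclasses import dataclass

@dataclass
class LabelPosition:
    video_index: int  # 0 = left-view 2D video, 1 = right-view 2D video
    x: int
    y: int

class ImageProcessingModule:
    def to_naked_eye_3d(self, frame_pair):
        """Generate a naked-eye 3D frame for the 3D display from the two 2D frames."""
        return frame_pair  # placeholder

class MessageObtainingModule:
    def obtain(self) -> str:
        """Obtain an instruction message (voice, gesture, or on-screen operation)."""
        return "label the gallbladder"

class TargetDeterminingModule:
    def determine(self, instruction: str) -> str:
        """Determine the target part (organ, tissue, or instrument) named in the message."""
        return instruction.rsplit(" ", 1)[-1]

class PositionDeterminingModule:
    def locate(self, target: str, frame_pair) -> list[LabelPosition]:
        """Locate the target in each of the two 2D surgical videos from its feature data."""
        return [LabelPosition(i, x=0, y=0) for i, _ in enumerate(frame_pair)]  # placeholder

class LabelGeneratingModule:
    def generate(self, target: str, positions: list[LabelPosition]) -> dict:
        """Generate a prompt (color block or text label) at each label position."""
        return {"target": target, "positions": positions}
```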

Prosecution Timeline

Mar 29, 2024: Application Filed
Mar 25, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12599385
ENDOSCOPE SYSTEM AND ENDOSCOPIC LIGATOR ATTACHMENT METHOD
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12594126
INTRALUMINAL NAVIGATION USING VIRTUAL SATELLITE TARGETS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12569117
ENDOSCOPE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12533012
METHOD FOR FIXING CABLES FOR ACTUATING THE DISTAL HEAD OF A MEDICAL DEVICE
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12507875
ENDOSCOPE AND ENDOSCOPE SYSTEM
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 79%
With Interview: 89% (+10.0%)
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 119 resolved cases by this examiner. Grant probability derived from career allow rate.
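
A minimal sketch of the with-interview projection, assuming it is simply the career allow rate plus the interview lift, capped at 100%; the function name is hypothetical.

```python
# Sketch: with-interview probability = career allow rate + interview lift, capped at 100%.
def project_with_interview(career_allow_rate: float, interview_lift: float) -> float:
    return min(career_allow_rate + interview_lift, 1.0)

print(f"{project_with_interview(0.79, 0.10):.0%}")  # 89%, matching the projection above
```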
