Prosecution Insights
Last updated: April 19, 2026
Application No. 18/343,860

PREDICTIVE VISUALIZATION OF MEDICAL IMAGING SCANNER COMPONENT MOVEMENT

Status: Non-Final OA (§103)
Filed: Jun 29, 2023
Examiner: YANG, YI
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Globus Medical Inc.
OA Round: 5 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 9m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 71% (295 granted / 415 resolved; +9.1% vs TC avg, above average)
Interview Lift: +17.2% for resolved cases with interview (strong)
Typical Timeline: 2y 9m avg prosecution; 39 currently pending
Career History: 454 total applications across all art units
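The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic (variable names are ours, not the tool's):

```python
# Recompute the examiner's headline allow rate from the raw counts shown above.
granted = 295    # applications granted
resolved = 415   # total resolved applications

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # 71.1%, displayed as 71%
```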

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 76.0% (+36.0% vs TC avg)
§102: 2.7% (-37.3% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 415 resolved cases
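Each statute row pairs the examiner's rate with a delta against the Tech Center average, so the implied TC baseline can be recovered by subtraction. A sketch under that reading of the chart (the dashboard's own rounding may differ):

```python
# Recover the implied Tech Center average from each (rate, delta-vs-TC) pair.
rows = {
    "101": (7.4, -32.6),
    "103": (76.0, +36.0),
    "102": (2.7, -37.3),
    "112": (3.3, -36.7),
}
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta  # delta = rate - TC average
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")
# Every row implies the same ~40.0% baseline, consistent with a single
# Tech Center estimate line on the original chart.
```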

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/15/2025 has been entered. Claims 1-10 remain pending in the application; claims 11-20 are allowed.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-6 are rejected under 35 U.S.C. 103 as being unpatentable over Flexman U.S. Patent Application 20190371012 in view of Wang U.S. Patent Application 20160166333, and further in view of Finley U.S. Patent 9510771. Regarding claim 1, Flexman discloses a method of using a medical imaging scanner including a gantry having a movable C-arm supporting an imaging signal transmitter and a detector panel that are movable along an arc relative to a station (paragraph [0050]: FIG.
5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106.The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects; paragraph [0056]: the system 100 may include a real-time dose monitoring device and the graphic processing module 110 may be configured to generate contextual overlays 116 identifying individuals that have received a high dose of radiation based on measurements from the real-time dose monitoring device… move a radiation detector, etc. based on instructions from the database 122, from user input or in response to radiation detected by the dose monitoring device; paragraph [0027]: The imaging system 113 may be an x-ray system… or other imaging systems known in the art), the method comprising: under the control of a processor (paragraph [0029]: The interactive medical device 102 may include a workstation or console 103 from which a procedure is supervised and/or managed. Workstation 103 preferably includes one or more processors 105 and memory 107 for storing programs and applications): determining by the processor a pose of the movable C-arm (paragraph [0050]: FIG. 
5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106; paragraph [0032]: The system further includes a graphic processing module 110 which is configured to generate a contextual overlay 116 on the augmented reality display device 106 on a specific region of the display device corresponding to the user's field of view; paragraph [0046]: overlays may be generated by the graphic processing module 110 which guide the user to manipulate a geometry module in order to move the c-arm of an imaging device to a specific position or orientation); generating a second virtual imaging path extending from a location of the imaging signal transmitter to a location of the detector panel based on the determined pose of the movable C-arm, wherein the second virtual imaging path represents an imaging path of a second imaging scan to be taken; generating a first virtual imaging path extending from an earlier defined location of the imaging signal transmitter to an earlier defined location of the detector panel based on the determined pose of the movable C-arm; wherein the first virtual imaging path represents an imaging path of a first medical scan that has been taken (paragraph [0048]: In FIG. 4, the graphic processing module 110 is configured to generate a crosshair 148 (first virtual imaging path) on the augmented reality display device 106 identifying a current isocenter of the c-arm. 
The graphic processing module 110 is also configured to generate a crosshair 150 (second virtual imaging path) identifying the target isocenter; paragraph [0049]: The user 101 may then view the position of the c-arm 154 and the current and target crosshair isocenters 148, 150); generating a virtual image of the imaging signal transmitter and the detector panel at the earlier defined location, wherein the first virtual imaging path extends between the generated virtual image of the imaging signal transmitter and the detector panel at the earlier defined location (paragraph [0048]: In FIG. 4, the graphic processing module 110 is configured to generate a crosshair 148 (earlier defined location) on the augmented reality display device 106 identifying a current isocenter of the c-arm; paragraph [0049]: The user 101 may then view the position of the c-arm 154 and the current and target crosshair isocenters 148, 150); displaying the generated first and second virtual beam paths, and the generated virtual image of the imaging signal transmitter and the detector panel at the earlier defined location on an augmented reality (AR) display device as an overlay to the medical imaging scanner to guide the user to the second imaging scan to be taken (paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106; paragraph [0046]: overlays may be generated by the graphic processing module 110 which guide the user to manipulate a geometry module in order to move the c-arm of an imaging device to a specific position or orientation; paragraph [0048]: In FIG. 4, the graphic processing module 110 is configured to generate a crosshair 148 (first virtual imaging path) on the augmented reality display device 106 identifying a current isocenter of the c-arm. 
The graphic processing module 110 is also configured to generate a crosshair 150 (second virtual imaging path) identifying the target isocenter; paragraph [0049]: The user 101 may then view the position of the c-arm 154 and the current and target crosshair isocenters 148, 150). Flexman discloses all the features with respect to claim 1 as outlined above. However, Flexman fails to disclose imaging signal transmitter and detector panel explicitly, the imaging signal transmitter and the detector panel at the earlier defined location and present location, and the second imaging path is orthogonal to the first imaging path. Wang discloses imaging signal transmitter and detector panel (paragraph [0138]: the controller 950 may be electrically coupled to the first light module 902... the controller (transmitter) may transmit signals to the first light module 902, the second light module 904, and the third light module 906 to activate the first, second, and third lasers and/or set the orientations of the first, second, and third lasers; paragraph [0101]: The laser guidance system can be attached to part of the C-arm (e.g. flat panel detector, image intensifier, X-ray tube, or the arm itself)), the second imaging path represents an imaging path of a second imaging scan to be taken; the first imaging path represents an imaging path of a first medical scan that has been taken; and the second imaging path is orthogonal to the first imaging path (paragraph [0102]: FIG. 10, a front elevation view illustrates an operating table and patient with a trajectory to be visualized with a targeting system attached to an imaging device in the form of a C-arm fluoroscopy unit, illustrated in two orthogonal imaging positions; Wang’s teaching of obtaining images based on a planned imaging path can be combined with Flexman’s device so as to generate a virtual imaging path on the display).
Therefore, it would be obvious before the effective filing date of the claimed invention to combine Flexman’s to use detector panel as taught by Wang, to provide an efficient targeting method. Flexman as modified by Wang discloses all the features with respect to claim 1 as outlined above. However, Flexman as modified by Wang fails to disclose the imaging signal transmitter and the detector panel at the earlier defined location and present location explicitly. Finley discloses the imaging signal transmitter and the detector panel at the earlier defined location and present location (col. 8 line 18-21: The screen display 100 may show the user an ideal (sample) lateral image 134 and a C-arm status indicator field 126 to ensure that the C-arm is positioned 90° from the previous cross-table position; col. 17 line 33-37: As the C-arm is moved from one spinal level to another, the virtual marker (e.g. dot 130) captures the C-arm's 26 current position as it moves up and down or left and right relative to the virtual A/P and lateral fluoroscopic images). Therefore, it would be obvious before the effective filing date of the claimed invention to combine Flexman and Wang’s to track previous and current position as taught by Finley, to track the location of surgical objects within the surgical field. Regarding claim 2, Flexman as modified by Wang and Finley discloses the method of Claim 1, further comprising adjusting, by a user, the C-arm position to center the imaging scanner based on the displayed virtual imaging paths (Flexman’s paragraph [0048]: in FIG. 4, in order to assist the user in isocentering the c-arm, the graphic processing module 110 may be configured to provide contextual overlays 116 which identify the current center of a c-arm and the target isocenter of the c-arm. In FIG. 
4, the graphic processing module 110 is configured to generate a crosshair 148 (first virtual imaging path) on the augmented reality display device 106 identifying a current isocenter of the c-arm. The graphic processing module 110 is also configured to generate a crosshair 150 (second virtual imaging path) identifying the target isocenter). Therefore, it would be obvious before the effective filing date of the claimed invention to combine Flexman’s to use detector panel as taught by Wang, to provide an efficient targeting method; and combine Flexman and Wang’s to track previous and current position as taught by Finley, to track the location of surgical objects within the surgical field. Regarding claim 3, Flexman as modified by Wang and Finley discloses the method of Claim 1, wherein generating a second virtual imaging path includes generating a virtual path that diverges from a point source of the transmitter to the detector panel (Flexman’s paragraph [0048]: In FIG. 4, the graphic processing module 110 is configured to generate a crosshair 148 (virtual imaging path) on the augmented reality display device 106 identifying a current isocenter of the c-arm. The graphic processing module 110 is also configured to generate a crosshair 150 (virtual imaging path) identifying the target isocenter; paragraph [0038]: the target anatomy may first be isocentered so that the c-arm of the imaging device takes a path that rotates around the target anatomy in a manner wherein the target anatomy of the subject 111 remains in the center of the image; Wang’s paragraph [0138]: the controller 950 may be electrically coupled to the first light module 902... 
the controller (transmitter) may transmit signals to the first light module 902, the second light module 904, and the third light module 906 to activate the first, second, and third lasers and/or set the orientations of the first, second, and third lasers; paragraph [0101]: The laser guidance system can be attached to part of the C-arm (e.g. flat panel detector, image intensifier, X-ray tube, or the arm itself)). Therefore, it would be obvious before the effective filing date of the claimed invention to combine Flexman’s to use detector panel as taught by Wang, to provide an efficient targeting method; and combine Flexman and Wang’s to track previous and current position as taught by Finley, to track the location of surgical objects within the surgical field. Regarding claim 5, Flexman as modified by Wang and Finley discloses the method of Claim 1, further comprising: predictively determining a range of motion of the movable C-arm based on the determined pose without moving the movable C-arm; generating a graphical object based on the determined range of motion, wherein the graphical object indicates a clearance region that will not be contacted by the imaging signal transmitter and the detector panel during imaging; and displaying the generated graphical object on the AR display device as an overlay to the medical imaging scanner (Flexman’s paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region). 
Therefore, it would be obvious before the effective filing date of the claimed invention to combine Flexman’s to use detector panel as taught by Wang, to provide an efficient targeting method; and combine Flexman and Wang’s to track previous and current position as taught by Finley, to track the location of surgical objects within the surgical field. Regarding claim 6, Flexman as modified by Wang and Finley discloses the method of Claim 5, wherein: generating a graphical object based on the determined range of motion comprises generating a first graphical object that represents the imaging signal transmitter and has a first pose that is rotated and offset to a first location along the arc relative to a present location of the imaging signal transmitter, generating a second graphical object that represents the detector panel and has a second pose that is rotated and offset to a second location along the arc relative to a present location of the detector panel (Flexman’s paragraph [0048]: In FIG. 4, the graphic processing module 110 is configured to generate a crosshair 148 (first graphical object) on the augmented reality display device 106 identifying a current isocenter of the c-arm. The graphic processing module 110 is also configured to generate a crosshair 150 (second graphical object) identifying the target isocenter; paragraph [0038]: the target anatomy may first be isocentered so that the c-arm of the imaging device takes a path that rotates around the target anatomy in a manner wherein the target anatomy of the subject 111 remains in the center of the image; Wang’s paragraph [0138]: the controller 950 may be electrically coupled to the first light module 902... 
the controller (transmitter) may transmit signals to the first light module 902, the second light module 904, and the third light module 906 to activate the first, second, and third lasers and/or set the orientations of the first, second, and third lasers; paragraph [0101]: The laser guidance system can be attached to part of the C-arm (e.g. flat panel detector, image intensifier, X-ray tube, or the arm itself)). Therefore, it would be obvious before the effective filing date of the claimed invention to combine Flexman’s to use detector panel as taught by Wang, to provide an efficient targeting method; and combine Flexman and Wang’s to track previous and current position as taught by Finley, to track the location of surgical objects within the surgical field. Claim 4 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Flexman U.S. Patent Application 20190371012 in view of Wang U.S. Patent Application 20160166333, in view of Finley U.S. Patent 9510771, and further in view of Siewerdsen U.S. Patent Application 20140049629. Regarding claim 4, Flexman as modified by Wang and Finley discloses receiving a digital image of the medical imaging scanner (Flexman’s paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106; imaging scanner is mounted on c-arm). However, Flexman as modified by Wang and Finley fails to disclose identifying within the digital image a location and orientation of a plurality of spaced-apart navigation markers attached to the gantry, and identifying a pose of at least one of the imaging signal transmitter and the detector panel based on the location and orientation of the plurality of spaced-apart tracking markers. 
Siewerdsen discloses identifying within the digital image a location and orientation of a plurality of spaced-apart navigation markers attached to the gantry, and identifying a pose of at least one of the imaging signal transmitter and the detector panel based on the location and orientation of the plurality of spaced-apart tracking markers (paragraph [0027]: Affixing multiple markers in known arrangements to surgical tools and other structures to be tracked allows real-time measurement of pose in the world (tracker) reference frame. The pose of the markers is defined by the position and orientation; paragraph [0026]: the tracker device 12 may be mounted to (or proximate to) an image receptor 15 of the imaging or treatment device 14; paragraph [0044]: Direct mounting of a surgical tracker 12 on a rotational C-arm 14 (or other imaging or therapy device) offers potential performance and functional advantages… The hex-face reference marker demonstrates capability to maintain registration in a dynamic reference frame, e.g., across a full C-arm range of rotation and, by virtue of its perspective over the operating table… using as many faces as visible for any pose measurement are also possible). Therefore, it would be obvious before the effective filing date of the claimed invention to combine Flexman, Wang and Finley’s to use marker to determine pose as taught by Siewerdsen, to provide a tracking and navigation system with improved field-of-view and line-of-sight for the tracker as well as improved accuracy, ease of set up, alignment and calibration. 
Regarding claim 8, Flexman as modified by Wang, Finley and Siewerdsen discloses the method of claim 5, further comprising: determining by the processor that a physical object which is separate from the gantry has a surface that extends from a location outside the graphical object displayed on the AR display device to another location that is within the graphical object; and performing a collision alert action responsive to the determination (Flexman’s paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region; Siewerdsen’s paragraph [0043]: Video augmentation capability could include a direct image of the patient from the perspective of the treatment device, augmented by overlay of various imaging and/or treatment planning information; paragraph [0038]: in the case of mobile C-arm CT, external factors can interfere with CT acquisition (such as a collision between the C-arm and the patient bed during the scan, metallic artifact due to surrounding metallic parts, etc.) should also be taken into account during this process… the field of view may be visualized overlaid onto the video image in real-time). Therefore, it would be obvious before the effective filing date of the claimed invention to combine Flexman, Wang and Finley’s to use marker to determine pose as taught by Siewerdsen, to provide a tracking and navigation system with improved field-of-view and line-of-sight for the tracker as well as improved accuracy, ease of set up, alignment and calibration. 
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Flexman U.S. Patent Application 20190371012 in view of Wang U.S. Patent Application 20160166333, in view of Finley U.S. Patent 9510771, in view of Siewerdsen U.S. Patent Application 20140049629, and further in view of Allred U.S. Patent Application 20080144906. Regarding claim 7, Flexman as modified by Wang, Finley and Siewerdsen discloses AR display device (Flexman’s paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106). However, Flexman as modified by Wang, Finley and Siewerdsen fails to disclose repeating the steps of generating the graphical object for a plurality of locations along the arc and the providing the graphical object to the display device to animate repetitive movement of the imaging signal transmitter and the detector panel through at least part of the range of motion along the arc. Allred discloses repeating the steps of generating the graphical object for a plurality of locations along the arc and the providing the graphical object to the display device to animate repetitive movement of the imaging signal transmitter and the detector panel through at least part of the range of motion along the arc (paragraph [0033]: the detector 34 may be moved from a zero angular reference point through 145 degree of rotation while image exposures 32 are taken at predefined arc intervals to obtain a set of image exposures used to construct a 3-D volume). Therefore, it would be obvious to one of ordinary skill in the art at the time the invention was made to combine Flexman, Wang, Finley and Siewerdsen’s to take image at predefined arc intervals as taught by Allred, to capture and replay of full resolution video in surgical navigation. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Flexman U.S. Patent Application 20190371012 in view of Wang U.S. Patent Application 20160166333, in view of Finley U.S.
Patent 9510771, in view of Siewerdsen U.S. Patent Application 20140049629, and further in view of Higuchi U.S. Patent Application 20090316956. Regarding claim 9, Flexman as modified by Wang, Finley and Siewerdsen discloses collision risk for the at least one of the imaging signal transmitter and the detector panel when moved along the arc through the range of motion (Flexman’s paragraph [0050]: FIG. 5 shows an image 153 of an imaging device having a c-arm 154 on an augmented reality display device 106. The graphic processing module 110 is configured to generate a contextual overlay 156 which represents the areas needed for clearing during an image acquisition process so that the c-arm 154 may freely move without hitting any objects. The user when viewing the contextual overlay 156 on the augmented reality display device can easily determine if an object is within the region that requires clearance and may then remove the object from the region). However, Flexman as modified by Wang, Finley and Siewerdsen fails to disclose providing another graphical object for display as an overlay relative to the physical object and that identifies the physical object as being a collision risk. Higuchi discloses providing another graphical object for display as an overlay relative to the physical object and that identifies the physical object as being a collision risk (paragraph [0047]: the recognition result is superimposed on an input image is outputted to the display 15 and the recognition result is transmitted to the control unit 2 and if a risk of collision is determined under the control of the control unit 2, a control operation for mitigation or avoidance of collision is carried out or either issuance of an alarm sound or display of an alarm image is carried out to inform the vehicle driver of the emergency). 
Therefore, it would be obvious to one of ordinary skill in the art at the time the invention was made to combine Flexman, Wang, Finley and Siewerdsen’s to overlay alarm image as taught by Higuchi, to detect obstacles and avoid collision.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Flexman U.S. Patent Application 20190371012 in view of Wang U.S. Patent Application 20160166333, in view of Finley U.S. Patent 9510771, in view of Siewerdsen U.S. Patent Application 20140049629, and further in view of Feiten U.S. Patent Application 20110224904. Regarding claim 10, Flexman as modified by Wang, Finley and Siewerdsen discloses all the features with respect to claim 8 as outlined above. However, Flexman as modified by Wang, Finley and Siewerdsen fails to disclose communicating a command to the medical imaging scanner that disables electronic movement of the movable C-arm at least in a direction that may collide with the physical object. Feiten discloses communicating a command to the medical imaging scanner that disables electronic movement of the movable C-arm at least in a direction that may collide with the physical object (paragraph [0041]: FIG. 5 shows a further variant of the inventive method for collision avoidance based on the circular movement of the C-arm 1 along path B upon the calculation of an avoidance or stopping movement). Therefore, it would be obvious to one of ordinary skill in the art at the time the invention was made to combine Flexman, Wang, Finley and Siewerdsen’s to disable electronic movement as taught by Feiten, to create monitoring of the spatial environment of a mobile device, in which actions for the prevention of collisions in the case of a danger of collision are performed with a minimal time delay.

Allowable Subject Matter

Claims 11-20 are allowed.
The following is an examiner’s statement of reasons for allowance: Claim 11 is about a method of using an x-ray imaging scanner including a gantry mount, a gantry slidably mounted to the gantry mount and having a movable C-arm supporting an imaging signal transmitter and a detector panel that are movable along an arc relative to the gantry mount and a plurality of navigation markers attached to the gantry mount for determining a pose of the gantry mount, the gantry mount remaining stationary while the moveable C-arm moves, the method comprising: under the control of a processor of a computer system: obtaining a digital image of the x-ray medical imaging scanner from a navigation camera; determining a pose of the gantry mount of the medical imaging scanner based on the plurality of navigation markers contained in the digital image; generating a second virtual imaging path extending from a present location of the imaging signal transmitter to a present location of the detector panel based on the determined pose of the movable C-arm, wherein the second virtual imaging path represents an imaging path of a second imaging scan to be taken; generating a first virtual imaging path extending from an earlier defined location of the imaging signal transmitter to an earlier defined location of the detector panel based on the determined pose of the movable C-arm, wherein the first virtual imaging path represents an imaging path of a first medical scan that has already been taken; displaying the generated first and second virtual beam paths simultaneously on an augmented reality (AR) display device as an overlay to the medical imaging scanner to guide the user to the second imaging scan to be taken in the future, wherein the second virtual imaging path is orthogonal to the first virtual imaging path, wherein the movable C-arm includes a first arm portion slidably coupled to a gantry mount of the gantry and a second arm portion slidably coupled to the first arm portion. 
Flexman 20190371012, Siewerdsen 20140049629, Wang 20160166333, Hou 20170347981 and Finley U.S. Patent 9510771 combined do not fully disclose these limitations. These limitations when read in light of the rest of the limitations in the claim make the claim allowable subject matter. Claims 12-20 depend on claim 11 and are allowed for the same reasons as claim 11.

Response to Arguments

Applicant's arguments filed 12/15/2025, page 8 - 10, with respect to the rejection(s) of claim(s) 1 under 103, have been fully considered and are moot upon a new ground(s) of rejection made under 35 U.S.C. 103 as being unpatentable over Flexman U.S. Patent Application 20190371012 in view of Wang U.S. Patent Application 20160166333, and further in view of Finley U.S. Patent 9510771, as outlined above. Applicant argues on page 8-9 that claim 1 has been amended to recite that “a first image has already been taken and the second image is about to be taken”. In reply, the rejection is based on Flexman, Wang and Finley combined. Flexman’s paragraph [0048]: In FIG. 4, the graphic processing module 110 is configured to generate a crosshair 148 (current position to take the image) on the augmented reality display device 106 identifying a current isocenter of the c-arm. The graphic processing module 110 is also configured to generate a crosshair 150 (next position about to take the image) identifying the target isocenter; paragraph [0049]: The user 101 may then view the position of the c-arm 154 and the current and target crosshair isocenters 148, 150. Finley discloses the imaging signal transmitter and the detector panel at the earlier defined location and present location (col. 8 line 18-21: The screen display 100 may show the user an ideal (sample) lateral image 134 and a C-arm status indicator field 126 to ensure that the C-arm is positioned 90° from the previous cross-table position; col. 17 line 33-37: As the C-arm is moved from one spinal level to another, the virtual marker (e.g.
dot 130) captures the C-arm's 26 current position as it moves up and down or left and right relative to the virtual A/P and lateral fluoroscopic images). Applicant argues on page 9-10 that Wang’s beam path can never represent a path that was already taken. Rather, both beams can only be simultaneously shown to take only one image. In reply, Wang’s paragraph [0101]: The laser guidance system can be attached to part of the C-arm (e.g. flat panel detector, image intensifier, X-ray tube, or the arm itself). It’s well known in the art that a C-arm can rotate to take images from different orientations. Paragraph [0039]: FIG. 10 is a front elevation view of an operating table, patient, and a trajectory to be visualized with a targeting system attached to a C-arm fluoroscopy unit. In summary, the examiner suggests applicant consider incorporating claim 4 into claim 1 to further advance prosecution.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Yi Yang whose telephone number is (571)272-9589. The examiner can normally be reached on Monday-Friday 9:00 AM-6:00 PM EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached on 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). /YI YANG/ Examiner, Art Unit 2616

Prosecution Timeline

Jun 29, 2023: Application Filed
Jul 24, 2024: Non-Final Rejection — §103
Oct 25, 2024: Response Filed
Dec 05, 2024: Final Rejection — §103
Mar 06, 2025: Request for Continued Examination
Mar 12, 2025: Response after Non-Final Action
Apr 04, 2025: Non-Final Rejection — §103
Jul 07, 2025: Response Filed
Aug 13, 2025: Final Rejection — §103
Dec 15, 2025: Request for Continued Examination
Jan 14, 2026: Response after Non-Final Action
Mar 20, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586304: PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12567129: Image Processing Method and Electronic Device (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561276: SYSTEMS AND METHODS FOR UPDATING MEMORY SIDE CACHES IN A MULTI-GPU CONFIGURATION (granted Feb 24, 2026; 2y 5m to grant)
Patent 12541902: SIGN LANGUAGE GENERATION AND DISPLAY (granted Feb 03, 2026; 2y 5m to grant)
Patent 12541896: COMPUTER-BASED CONTENT PERSONALIZATION OF A VISUAL DISPLAY (granted Feb 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 71%
With Interview: 88% (+17.2%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
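The 88% with-interview projection appears to be the base grant probability plus the interview lift, treated additively. A sketch under that assumption (not a documented formula of the tool):

```python
# Combine the base grant probability with the interview lift, assuming the
# lift is a simple additive percentage-point adjustment.
base_probability = 71.0   # grant probability (%), from career allow rate
interview_lift = 17.2     # percentage-point lift observed with interviews

with_interview = base_probability + interview_lift
print(f"With interview: {with_interview:.0f}%")  # 88%, matching the projection
```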

Free tier: 3 strategy analyses per month