Prosecution Insights
Last updated: April 19, 2026
Application No. 18/673,624

METHOD FOR OPERATING A VISUALIZATION SYSTEM IN A SURGICAL APPLICATION, AND VISUALIZATION SYSTEM FOR A SURGICAL APPLICATION

Non-Final OA: §102, §Other
Filed
May 24, 2024
Examiner
GALERA, PATRICK PAUL CONTRER
Art Unit
2617
Tech Center
2600 — Communications
Assignee
Carl Zeiss Meditec AG
OA Round
1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 86% (above average; 6 granted / 7 resolved; +23.7% vs TC avg)
Interview Lift: +16.7% among resolved cases with interview
Typical Timeline: 2y 5m average prosecution; 21 applications currently pending
Career History: 28 total applications across all art units
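The headline examiner metrics above are simple ratios over resolved cases. As a hypothetical sketch (not this tool's actual formula), the career allow rate and the interview lift could be reproduced from resolved-case counts like this; the with/without-interview rates passed to `interview_lift` are illustrative placeholders, since the report only states the lift itself:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference between the allow rate of cases
    resolved with an examiner interview and those resolved without."""
    return rate_with - rate_without

# Counts from the report: 6 granted out of 7 resolved.
print(f"Career allow rate: {allow_rate(6, 7):.0f}%")  # 85.7% rounds to 86%
```

With only 7 resolved cases, both figures carry wide error bars, which is worth keeping in mind when reading the projections below.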

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 72.9% (+32.9% vs TC avg)
§102: 18.8% (-21.2% vs TC avg)
§112: 5.2% (-34.8% vs TC avg)
TC avg = Tech Center average estimate. Based on career data from 7 resolved cases.
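Each "vs TC avg" figure is just the examiner's per-statute number minus the Tech Center average estimate, so the implied baseline can be recovered by subtraction. This is plain arithmetic on the figures shown above, not the tool's code:

```python
# Per-statute figures and deltas as reported above; the TC average
# estimate is implied by value - delta.
rates  = {"101": 2.1, "103": 72.9, "102": 18.8, "112": 5.2}
deltas = {"101": -37.9, "103": 32.9, "102": -21.2, "112": -34.8}

for statute, rate in rates.items():
    tc_avg = round(rate - deltas[statute], 1)
    print(f"§{statute}: {rate}% ({deltas[statute]:+}% vs implied TC avg {tc_avg}%)")
```

In this dataset the implied baseline works out to 40.0% for every statute, which suggests the "TC avg" shown is a single flat estimate rather than a per-statute average.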

Office Action

Grounds: §102, §Other
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The drawings are objected to under 37 CFR 1.83(a) because they fail to show details in Fig. 4 as described in the specification. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d).

Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-2 and 5-10 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Chiou et al. (US 12211151 B1, hereinafter “Chiou”).

Regarding claim 10: A visualization system for a surgical application (Chiou: Claim 1, “A system for improving accuracy of an augmented reality display for a surgical procedure, . . .”; Abstract: “. . . devices and methods for performing a surgical step or surgical procedure with visual guidance using a head mounted display with systems . . .”), comprising:

a capturing device configured for capturing at least one image representation of a region to be operated on and/or operated on (Chiou: col. 11, lines 52-55, “In some embodiments, one or more head mounted displays can be used to display volume data or surface data, e.g. of a patient, of imaging studies, of graphical representation and/or CAD files”; col. 122, lines 6-12, “. . . Live data 18 of the patient, for example from the surgical site, . . . using one or more IMUs, optical markers, navigation markers, image or video capture systems and/or spatial anchors . . .”; NOTE: The claimed image representation is Chiou’s Live data 18/graphical representation. The region to be operated on is the surgical site.),

a main display device configured for displaying the at least one image representation captured (Chiou: col. 154, lines 54-61, “. . . view on the display of the standalone or separate computer or display monitor the viewer can have the concurrent benefit of viewing the data and/or images using the full intended field of view of patient data . . .”; NOTE: The main display device is the display monitor. The image representation captured that is displayed is the images of patient data viewed on the display monitor.),

a visualization device that can be worn on the head with a display device (Chiou: Fig. 7; col. 176, lines 8-11, “. . . The resultant 3D model of the patient's bone using any of these techniques can then be displayed by one or more OHMD's. . .”; col. 27, line 51 to col. 28, line 1, “Exemplary optical see through head mounted displays include . . . the Microsoft Hololens and Hololens 2 . . . It is a pair of augmented reality smart glasses. Hololens is a see through optical head mounted display (or optical see through head mounted display) 1125 (see FIG. 7). . . The visor includes a pair of transparent combiner lenses, in which the projected images are displayed. . .”; NOTE: The display device of the OHMD is the pair of transparent combiner lenses.),

a pose sensor system configured for capturing a pose of the visualization device that can be worn on the head relative to a display surface of the main display device (Chiou: col. 143, lines 43-58, “. . . In some embodiments, the HMD system can detect, e.g. automatically, if the surgeon or operator is looking at a computer or display monitor separate from the HMD, for example, with use of an image and/or video capture system and/or 3D scanner integrated into, attached to or separate from the HMD. . . The image and/or video capture system and/or 3D scanner can, for example, capture the outline of the computer or display monitor, e.g. round, square or rectangular, and the software can, optionally, automatically match, superimpose or align the items or structures displayed by the HMD with the items or structures displayed by the standalone or separate computer or display monitor. . .”; NOTE: The pose sensor system is the image/video/3D scanner system integrated into Chiou’s HMD system. The pose is the detected head position of the operator when looking at the display monitor. The HMD system detects the head position of the operator wearing the HMD when looking at the display monitor; therefore, it captures the pose of the visualization device. The pose is relative to the display surface because the HMD system detects that the operator is looking at the monitor. Operator looks at the display monitor >> HMD system detects that the operator is looking at the display monitor.),

and a control device (), wherein the control device is configured to generate and/or to provide at least one three-dimensional augmentation information item corresponding to the at least one image representation displayed and to communicate it for display to the display device of the visualization device that can be worn on the head (Chiou: col. 2, lines 37-43, “. . . , a system for adjusting an augmented reality display for a surgical procedure is provided. In some embodiments, the system comprises a see through optical head mounted display, and at least one computer processor . . . configured to generate a 3D stereoscopic display by the see through optical head mounted display. . .”; col. 215, lines 13-17, “. . . a computer processor can display one or more x-rays, a CT scan or MRI scan [optionally displayed by the OHMD as one or more 2D slices or a 3D reconstruction of the anatomy] using the OHMD superimposed onto and/or aligned with the corresponding anatomic structures of the patient. . .”; col. 20, lines 40-44, “. . . the one or more head mounted displays display . . . 2D or 3D images of the patient, . . .”; col. 60, lines 52-59, “. . . An head mounted display can project or display a digital hologram of virtual data or virtual data of the patient 55. . . . superimposed and aligned with the live data of the patient, e.g. the surgical site . . .”; col. 160, lines 39-41, “the HMD can display a virtual 2D or 3D image of the patient's normal or diseased tissue or an organ or a surgical site . . .”; col. 168, lines 42-57, “. . . virtual data of the patient . . . virtual surgical sites . . . can be displayed by the HMD in two, three or more dimensions. . .”; NOTE: The control device is the computer processor. The computer processor generates 3D images of the patient, which are displayed on the head mounted device. The 3D augmentation information item is the 3D images, corresponding to the patient, generated by the computer processor. The computer processor inherently communicates with the display device of the HMD so the images can be viewed by the operator wearing the HMD. Live data of patient >> computer processor generates 3D images of patient >> communicates with the HMD display device for viewing in augmented reality.),

and to carry out the generating and/or providing of the at least one three-dimensional augmentation information item in consideration of the captured pose (Chiou: col. 143, line 43 to col. 144, line 3, “. . . the HMD system can detect, e.g. automatically, if the surgeon or operator is looking at a computer or display monitor separate from the HMD, . . . The standalone or separate computer or display monitor can be used, for example, to display image data, e.g. of a patient, or to concurrently display virtual data displayed by the HMD. The image and/or video capture system and/or 3D scanner can, for example, capture the outline of the computer or display monitor, e.g. round, square or rectangular, and the software can, optionally, automatically match, superimpose or align the items or structures displayed by the HMD with the items or structures displayed by the standalone or separate computer or display monitor. . .”; NOTE: The captured pose is the HMD system detecting the operator of the HMD looking at a separate display monitor. Both the HMD and the separate display monitor can display image data of the patient. The system then carries out the generating of the three-dimensional augmentation information, which is the patient’s image data, in consideration of the captured or detected operator’s pose looking at the display monitor so the images displayed by the HMD and the separate display monitor can be aligned.),

in such a way that the at least one image representation displayed on the main display device is extended into a three-dimensional region by the at least one three-dimensional augmentation information item corresponding to said at least one image representation (Chiou: col. 145, lines 48-54, “. . . the HMD display can display structures extending beyond the portion of the visual field occupied by the standalone or separate computer or display monitor. The structures extending beyond the portion of the visual field occupied by the standalone or separate computer or display monitor can be continuous with the structures displayed by the standalone or separate computer or display monitor . . .”; NOTE: The image representation is the structures displayed by the display monitor. The three-dimensional region is the visual field. The 3D augmentation information is the 3D stereoscopic view generated by the computer processor corresponding to the patient’s image data.
When a structure extends beyond the portion of the visual field occupied by the display monitor, it extends into a 3D region viewed by the operator through the HMD.),

and wherein: a topography of the region to be operated on and/or operated on is captured and/or received, and the at least one three-dimensional augmentation information item is generated and/or provided in consideration of the captured and/or received topography (Chiou: col. 40, lines 28-35, “. . . optical imaging systems or 3D scanners . . . used to image . . . patient surface(s), organ surface(s), tissue surface(s), pathologic tissues and/or surface(s), e.g. for purposes of registration, e.g. of the patient and/or the surgical site, e.g. . .”; col. 59, lines 5-23, “Live data, e.g. live data of the patient. . . can be acquired or registered, for example, using a spatial mapping process. This process creates a three-dimensional mesh describing the surfaces of one or more objects or . . . generate 3D surface data by . . . resulting in a three-dimensional surface representation of the live data. . .”; NOTE: The region to be operated on/operated on is the imaged surgical site. 3D scanning is used to capture the topography, i.e. the surface features, of an object; in Chiou, the surface features of the patient/organ/tissue/surgical site. The 3D augmentation information item is the resulting 3D surface representation. Since the resulting 3D representation is based on the live data of the patient’s topography or surface features, the resulting 3D representation is generated in consideration of the captured topography. 3D scan captures surface features, i.e. the topography of the patient’s live data >> generates 3D surface representation in consideration of the captured surface features of the patient’s live data.),

and/or three-dimensional tomographic data corresponding to the region to be operated on and/or operated on are captured and/or received, and the at least one three-dimensional augmentation information item is generated and/or provided in consideration of the captured and/or received three-dimensional tomographic data (Chiou: col. 110, lines 35-40, “. . . the preoperative imaging can entail a cross-sectional imaging modality, e.g. computed tomography, which can optionally generate 3D data of the patient, e.g. in the form of a spiral or a helical CT scan and, optionally, a 3D reconstruction. The 3D data of the patient, e.g. the spiral or helical CT scan or 3D reconstruction. . .”; NOTE: Chiou also uses computed tomography to capture tomographic data of the patient. The 3D tomographic data captured is the generated 3D data of the patient acquired using computed tomography. The patient is the region to be operated on/operated on. The 3D augmentation information item is the generated 3D reconstruction. Capture 3D tomographic data using computed tomography >> generate 3D reconstruction in consideration of the captured 3D tomographic data.).

Regarding claim 1: Method claim 1 is drawn to the method corresponding to the configuration of using same as claimed in apparatus claim 10. Therefore, method claim 1 corresponds to the configuration in the apparatus of claim 10, and is rejected for the same reasons of anticipation as used above.

Regarding claim 2, depending on claim 1, Chiou teaches: The method as claimed in claim 1, wherein a pose of the capturing device relative to the region to be operated on and/or operated on is determined (Chiou: col. 111, lines 24-37, “. . . the calibration/registration phantom can be used 1.) To estimate distance, position, orientation of HMD from the patient, for primary or back-up registration, for example used in conjunction with an image and/or video capture system integrated into, attached to or coupled to or separate from the HMD 2.) To estimate distance, position, orientation of target tissue or surgical site underneath the patient's skin, e.g. after cross-registration with pre-operative and/or intra-operative imaging data 3.) To estimate the path of a surgical instrument or to estimate the location of a desired implantation site for a medical device or implant or transplant 4.) To update a surgical plan . . .”; NOTE: The pose of the capturing device is the distance, position, orientation of the HMD in relation to the patient.), wherein the at least one three-dimensional augmentation information item is generated and/or provided in consideration of the determined pose of the capturing device (Chiou: col. 113, line 59 to col. 114, line 9, “. . . with an image and/or video capture system integrated into or attached to the HMD or coupled to the HMD, any change in the position, location or orientation of the surgeon's or operator's head or body will result in a change in the perspective view and visualized size and/or shape of the calibration or registration phantom.
The change in perspective view and visualized size and/or shape of the calibration or registration phantom can be measured and can be used to determine the change in position, location or orientation of the surgeon's or operator's head or body, which can then be used to maintain registration between the virtual patient data and the live patient data, by moving the virtual patient data into a position, location, orientation and/or alignment that ensures that even with the new position, location or orientation of the surgeon's or operator's head or body the registration is maintained and the virtual and the live patient data are, for example, substantially superimposed or matched where desired . . .”; NOTE: The system considers the change in pose determined when the operator’s head or body moves to generate the virtual patient data, which is the 3D augmentation information item.).

Regarding claim 5, depending on claim 1, Chiou teaches: The method as claimed in claim 1, wherein with reference to the at least one image representation displayed and/or the captured and/or received topography and/or the captured and/or received tomographic data at least one three-dimensional marking is generated and/or received (Chiou: col. 103, line 62 to col. 104, line 14, “. . . intraoperative imaging, for example using x-ray imaging or CT imaging and/or ultrasound imaging, can be performed. Virtual patient data obtained intraoperatively using intraoperative imaging can be used to register virtual patient data obtained preoperatively, for example using preoperative x-ray, ultrasound, CT or MRI imaging. The registration of preoperative and intraoperative virtual data of the patient and live data of the patient in a common coordinate system with one or more HMDs can be performed, for example, by identifying and, optionally, marking corresponding landmarks, surfaces, object shapes, e.g. of a surgical site or target tissue, in the preoperative virtual data of the patient, the intraoperative virtual data of the patient, e.g. on electronic 2D or 3D images of one or more of the foregoing, and the live data of the patient. . .”; NOTE: CT = computed tomography. The three-dimensional markings generated/received are the identified markers marking corresponding landmarks, surfaces, and object shapes based on the CT data, which is the tomographic data. Chiou: col. 59, lines 5-23, “Live data, e.g. live data of the patient. . . can be acquired or registered, for example, using a spatial mapping process. This process creates a three-dimensional mesh describing the surfaces of one or more objects . . . can generate 3D surface data by collecting, for example, 3D coordinate information or information on the distance from the sensor of one or more surface points on the one or more objects or environmental structures. The 3D surface points can then be connected to 3D surface meshes, resulting in a three-dimensional surface representation of the live data. The surface mesh can then be merged with the virtual data using any of the registration techniques described in the specification”; NOTE: Spatial mapping = topography mapping. Chiou uses spatial mapping to acquire topography or surface data of the patient. The 3D markings are the surface points.), wherein the at least one three-dimensional augmentation information item is generated and/or provided in consideration of the at least one three-dimensional marking (Chiou: col. 59, lines 5-23, “. . . The 3D surface points can then be connected to 3D surface meshes, resulting in a three-dimensional surface representation of the live data. . .”; NOTE: The 3D markings are the surface points. The 3D augmentation information item is the resulting 3D surface representation of the live data.)
Regarding claim 6, depending on claim 1, Chiou teaches: The method as claimed in claim 1, wherein a pose of at least one actuation element is captured (Chiou: col. 19, lines 41-45, “. . . In some embodiments, the surgical plan is used to derive one or more of a location, position, orientation, alignment, trajectory, plane, start point, or end point for one or more surgical instruments. . .”; col. 208, line 31 to col. 209, line 11, “Any of the registration techniques or techniques described herein including implantable and attachable markers, calibration and registration phantoms including optical markers, navigation markers, . . . can be applied for registering the patient's proximal femur in relationship to . . . one or more surgical instruments, . . .”; NOTE: The surgical instrument is an actuation element. The captured pose is the derived location, position, orientation, alignment, trajectory, start/end point of the surgical instruments for registration in relation to the patient.), wherein the at least one three-dimensional augmentation information item is generated and/or provided in consideration of the captured pose of the at least one actuation element (Chiou: col. 208, line 57 to col. 209, line 11, “. . . By registering the optical marker and/or patient specific marker or template in relationship to the OHMD also, e.g. in a common coordinate system with the OHMD, the surgical site, the proximal femur, the OHMD can display or superimpose and/or project digital holograms with different view coordinates for the left eye and the right eye of the user wearing the OHMD showing the desired or predetermined position, location, orientation, alignment and/or trajectory or predetermined plane of any surgical instrument including a saw . . . the OHMD can show the desired 3D trajectory including the desired location, entry point and angles in x, y and z direction for the femoral neck cut or the OHMD can display one or more digital holograms of a virtual cut plane and/or a virtual saw or saw blade in the position, location, angular orientation, and trajectory (e.g. as a dotted line or arrow) . . .”; NOTE: The generated holograms, including a virtual saw and dotted-line trajectory, are the 3D augmentation information item generated in consideration of the pose of the saw. If the pose of the saw is not captured or considered, the desired 3D trajectory cannot be generated, because the system does not know where the physical saw is positioned. Chiou’s system detects a surgical instrument such as a saw and registers or captures its pose >> generates a hologram including a virtual saw with the desired 3D trajectory.).

Regarding claim 7, depending on claim 6, Chiou teaches: The method as claimed in claim 6, wherein a trajectory is generated from the captured pose of the at least one actuation element, wherein the at least one three-dimensional augmentation information item is generated and/or provided in consideration of the generated trajectory (Chiou: col. 208, line 57 to col. 209, line 11, “the OHMD can display or superimpose and/or project digital holograms with different view coordinates for the left eye and the right eye of the user wearing the OHMD showing the desired or predetermined position, location, orientation, alignment and/or trajectory or predetermined plane of any surgical instrument including a saw . . . the OHMD can show the desired 3D trajectory including the desired location, entry point and angles in x, y and z direction for the femoral neck cut or the OHMD can display one or more digital holograms of a virtual cut plane and/or a virtual saw or saw blade in the position, location, angular orientation, and trajectory (e.g. as a dotted line or arrow) . . .”; NOTE: The generated trajectory is the digital hologram of the desired 3D trajectory of the surgical instrument. Chiou’s system detects a surgical instrument such as a saw and registers or captures its pose >> generates a hologram including a virtual saw with the desired 3D trajectory.).

Regarding claim 8, depending on claim 1, Chiou teaches: The method as claimed in claim 1, wherein additionally at least one two-dimensional augmentation information item is generated and/or provided (Chiou: col. 149-150, lines 1-5, “. . . the user or surgeon can view the standalone or separate computer or display monitor through the HMD display, the user or surgeon can experience a combination of 2D and 3D display information, e.g. of virtual anatomy of the patient and/or aspects of the virtual surgical plan . . .”; col. 145, lines 48-67, “. . . the HMD display can display items such as vital signs or patient demographics, or pre-operative imaging. . .”; NOTE: Textual information such as vital signs and demographics are 2D augmentation information items generated or provided for display by the HMD display.), wherein the at least one two-dimensional augmentation information item is displayed by means of the display device of the visualization device that can be worn on the head (NOTE: The HMD is the visualization device that can be worn on the head, as cited above; Chiou: col. 145, lines 48-67) in such a way that said information is at least partly superimposed on the display surface of the main display device (Chiou: col. 149-150, lines 1-5, “The different 2D and 3D displays by the HMD display and the standalone or separate computer or display monitor can be displayed and viewed simultaneously, in many embodiments substantially or partially superimposed. Since the user or surgeon can view the standalone or separate computer or display monitor through the HMD display, the user or surgeon can experience a combination of 2D and 3D display information, e.g. of virtual anatomy of the patient and/or aspects of the virtual surgical plan, not previously achievable”; NOTE: The main display device is the display monitor.) and/or that said information extends the display surface of the main display device (Chiou: col. 145, lines 48-67, “. . . the HMD display can display structures extending beyond the portion of the visual field occupied by the standalone or separate computer or display monitor. The structures extending beyond the portion of the visual field occupied by the standalone or separate computer or display monitor can be continuous with the structures displayed by the standalone or separate computer or display monitor. The structures outside the portion of the visual field occupied by the standalone or separate computer or display monitor can be separate and/or from the structures displayed by the standalone or separate computer or display monitor. For example, in addition to displaying one or more structures matching or corresponding to what is displayed by the standalone or separate computer or display monitor, the HMD display can display items such as vital signs or patient demographics, or pre-operative imaging studies in those portions of the visual field that do not include the standalone or separate computer or display monitor. This can be useful when the user, operator and/or surgeon is not looking at the patient. . .”).

Regarding claim 9, depending on claim 1, Chiou teaches: The method as claimed in claim 1, wherein a pose of at least one further visualization device that can be worn on the head relative to the display surface of the main display device is captured by means of the pose sensor system and/or a further pose sensor system, and wherein at least one further three-dimensional augmentation information item corresponding to the at least one image representation displayed is generated and/or provided and is displayed on a further display device of the at least one further visualization device that can be worn on the head, wherein the at least one further three-dimensional augmentation information item is generated and/or provided in consideration of the captured pose of the at least one further visualization device that can be worn on the head in such a way that the at least one image representation displayed on the main display device is extended into a three-dimensional region by the at least one further three-dimensional augmentation information item corresponding to said at least one image representation (Chiou: col. 3, lines 49-52, “FIG. 1 shows the use of multiple HMD's for multiple viewer's, e.g. a primary surgeon, second surgeon, surgical assistant(s) and/or nurses(s) according to some embodiments of the present disclosure.”; col. 36, lines 36-45, “Virtual data of the patient can be projected superimposed onto live data of the patient for each individual viewer by each individual HMD for their respective view angle or perspective by registering live data of the patient, e.g. the surgical field, and virtual data of the patient as well as each HMD in a common, shared coordinate system.
Thus, virtual data of the patient including aspects of a virtual surgical plan can remain superimposed and/or aligned with live data of the patient irrespective of the view angle or perspective of the viewer and alignment and/or superimposition can be maintained as the viewer moves his or her head or body”; col. 35, lines 38-48, “. . . Similarly, when multiple HMDs are used, e.g. one for the primary surgeon and additional ones, e.g. two, three, four or more, for other surgeons, assistants, residents, fellows, nurses and/or visitors, the HMDs worn by the other staff, not the primary surgeon, will also display the virtual representation(s) of the virtual data of the patient aligned with the corresponding live data of the patient seen through the HMD, wherein the perspective of the virtual data that is with the patient and/or the surgical site for the location, position, and/or orientation of the viewer's eyes for each of the HMDs used and each viewer. . .”; NOTE: The further visualization device of the claim is the HMD worn by other staff, not the primary surgeon, as disclosed by Chiou. The generated virtual data of the patient is respective to the individual HMD. The other HMDs used by the other staff have the same capabilities as the HMD of the primary surgeon. Claim 9 corresponds to a method of operating a visualization system in a surgical application that includes a further visualization device, worn by other staff as disclosed by Chiou, having the same limitations as claimed in claim 1, thereby satisfying the limitations of claim 9.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK GALERA whose telephone number is (571) 272-5070. The examiner can normally be reached Mon-Fri 0800-1700 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at 571-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PATRICK P GALERA/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617

Prosecution Timeline

May 24, 2024
Application Filed
Jan 24, 2026
Non-Final Rejection — §102, §Other (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602567
SYSTEM AND METHOD FOR RENDERING A VIRTUAL MODEL-BASED INTERACTION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597184
IMAGE PROCESSING METHOD AND APPARATUS, DEVICE AND READABLE STORAGE MEDIUM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12586549
Image conversion apparatus and method having timing reconstruction mechanism
Granted Mar 24, 2026 (2y 5m to grant)

Patent 12579921
ELECTRONIC DEVICE HAVING FLEXIBLE DISPLAY AND METHOD FOR CONTROLLING THE SAME
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12491085
SYSTEMS AND METHODS FOR ORTHOPEDIC IMPLANT FIXATION
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 99% (+16.7%)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
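The report does not state how the 99% with-interview figure follows from the 86% base rate and the +16.7% lift. One plausible reading, offered purely as an assumption, is an additive lift capped just below certainty:

```python
def grant_probability_with_interview(base_pct: float, lift_pct: float,
                                     cap_pct: float = 99.0) -> float:
    """Hypothetical model: add the interview lift to the base grant
    probability, capping the result below 100% (86 + 16.7 = 102.7,
    capped at 99)."""
    return min(base_pct + lift_pct, cap_pct)

print(grant_probability_with_interview(86.0, 16.7))  # -> 99.0
```

The cap value and the additive combination are both assumptions; the tool may instead condition on interview outcomes in its underlying case data.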
