DETAILED ACTION
This Office action is responsive to communications filed on 01/26/2026. Claims 1-2, 9, 12-13, 15, and 17-19 have been amended. Claims 11 and 14 are canceled. Claim 20 is withdrawn. Presently, Claims 1-10, 12-13, and 15-20 remain pending and are hereinafter examined on the merits.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/26/2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to the claim(s) have been considered but are moot. In view of the claim amendments, the previously issued rejections under 35 U.S.C. 103 are withdrawn and a new ground of rejection is made to all pending claims. The new ground of rejection does not rely solely on Yang et al (US 2013/0060146 A1) in view of Kumar et al (US 2014/0073907 A1), applied in the prior rejection of record, for any teaching or matter specifically challenged in the argument. The new ground of rejection relies upon Yang et al (US 2013/0060146 A1) in view of Moctezuma de la Barrera et al (US 2019/0069882 A1).
Applicant's arguments filed with respect to rejection under 35 USC 101 have been fully considered but they are not persuasive.
The Examiner directs the Applicant’s attention to the explanation provided in this Office Action regarding the grounds for rejection of the claims under 35 U.S.C. 101 in view of the amendments filed on 01/26/2026. Specifically, the Examiner’s response is set forth in the rejection under 35 U.S.C. 101 below, where it is noted that claim 2 is eligible. Claim 2 positively recites controlling the robot to execute the updated surgical plan based on the navigation guidance (i.e., “control[ling] a robot to execute the updated surgical plan based on navigation guidance”), and the claim as a whole therefore amounts to significantly more than the exception itself.
Claim Objections
The following claims are objected to because of the following informalities:
Claim 1: line 7 should recite “using an ultrasound imaging device”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10, 12-13, and 15-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1 of the subject matter eligibility test (see MPEP 2106.03).
Claims 1-10 and 12 are directed to a “method,” which describes one of the four statutory categories of patentable subject matter, i.e., a process.
Claims 13 and 15-19 are drawn to a “system,” which describes one of the four statutory categories, i.e., a machine.
Step 2A of the subject matter eligibility test (see MPEP 2106.04).
Prong One:
Claim 1 recites (“sets forth” or “describes”) the abstract idea of “a mental process” (MPEP 2106.04(a)(2).III.), substantially as follows:
“correlating a representation of the first bony anatomical object in the tomographic or topographic image data to a representation of the first and second bony anatomical objects in the first image data;”
Claim 13 recites (“sets forth” or “describes”) the abstract idea of “a mental process” (MPEP 2106.04(a)(2).III.), substantially as follows:
“correlate a representation of the first bony anatomical object in the tomographic or topographic image data to a representation of the first and second bony anatomical objects in the first image data;”
In claims 1 and 13, the above recited steps can be practically performed in the human mind. Specifically, correlating a representation of the first anatomical object in the second image data to a representation of the first and second anatomical objects in the first image data recites a mental process that can be practically performed in the human mind. The step involves recognizing, comparing, and associating visual information between two sets of images to determine their correspondence. A human, upon viewing two medical images, such as an MRI scan taken at one time and an ultrasound image taken later, can mentally identify anatomical features, observe how they align or differ, and conceptually determine how an anatomical structure has moved relative to another over time. This act of mentally mapping one image to another and recognizing the relationship between features constitutes observation, evaluation, and correlation of visual data, activities that fall squarely within the core of human mental activity. The step relies only on cognitive judgment to discern relationships between visual representations, which is a form of mental comparison and reasoning that can be carried out entirely in the human mind. There is nothing recited in the claim to suggest an undue level of complexity in how the correlating is done.
Prong Two: Claims 1 and 13 do not include additional elements that integrate the mental process into a practical application.
This judicial exception is not integrated into a practical application. In particular, the claims recite (1) the additional steps of “receiving first image data captured at a first time and comprising a first bony anatomical object and a second bony anatomical object anatomically connected to the first bony anatomical object, the first image data being captured using a first imaging modality; receiving second image data captured at a second time and comprising at least the first bony anatomical object, the second image data comprising tomographic or topographic image data of the first bony anatomical object captured using ultrasound imaging;”-(claim 1), and “a communication interface; an ultrasound imaging device; at least one processor; and a memory storing instructions for execution by the at least one processor that, when executed, cause the at least one processor to: receive, via the communication interface, first image data captured at a first time and comprising a first bony anatomical object and a second bony anatomical object anatomically connected to the first bony anatomical object, the first image data being captured using a first imaging modality different than the ultrasound imaging device; obtain, using the ultrasound imaging device, second image data at a second time that comprises at least the first bony anatomical object, the second image data comprising tomographic or topographic image data of the first bony anatomical object;”-(claim 13); and (2) the further additional steps of “updating a digital model of the first and second bony anatomical objects based on the correlation to reflect movement of the first bony anatomical object relative to the second bony anatomical object that occurred between the first time and the second time; updating a surgical plan based on the updated digital model; and generating navigation guidance to control a robot to execute the updated surgical plan.”-(claim 1), and “update a digital model of the first and second bony anatomical objects based on the correlation to reflect movement of the first bony anatomical object relative to the second bony anatomical object that occurred between the first time and the second time; update a surgical plan based on the updated digital model; and generate and output navigation guidance that controls a robot to execute the updated surgical plan.”-(claim 13).
The steps in (1) represent mere data gathering or pre-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality with conventionally used tools (see Step 2B below for further details). Data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)).
Regarding the processor language, which is written at a high level of generality with respect to structural limitations, the processor amounts to a generic computer component with mere instructions to implement the abstract idea on a computer.
The steps in (2) represent mere insignificant post-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality with conventionally used tools (see Step 2B below for further details). Data outputting and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)). Additionally, these steps do not amount to a practical application of the abstract idea in a manner that provides a meaningful technological improvement. While these steps involve the use of a digital model and a surgical plan, they merely apply the abstract mental correlation on a generic computer implementation without integrating the abstract idea into a specific technological solution. The steps do not recite any specific algorithm or step-by-step process that improves how digital models are generated, stored, or manipulated. Instead, the updating steps simply instruct a computer to perform conventional data manipulation, adjusting a digital representation and revising a plan, “based on” a previously determined correlation. At its core, this is generic post-solution activity, including the updating of stored information and the adjustment of a plan, representing nothing more than routine computer functions used as a tool to implement the underlying mental process. Moreover, the claimed operation does not improve the functioning of a computer or any imaging technology itself, nor does it solve a technological problem rooted in computer or imaging systems; rather, the steps use standard digital processing. Therefore, the recited steps fail to meaningfully limit the abstract idea and do not transform it into a practical application that yields a technological improvement or solution.
As a whole, the additional elements merely serve to gather and feed information to the abstract idea and to output a notification based on the abstract idea, while generically implementing it on conventionally used tools. There is no practical application because the abstract idea is not applied, relied on, or used in a meaningful way. No improvement to the technology is evident, and the information is not outputted in any way such that a practical benefit is realized. Therefore, the additional elements, alone or in combination, do not integrate the abstract idea into a practical application.
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Further, there is no evidence of record that would support an assertion that these steps provide an improvement to a computer or a technological solution to a technological problem. Ultimately, the Applicant describes an improvement in the process of using imaging techniques, but this is not an improvement in the function of a computer or other technology (see MPEP 2106.05(a)(ii): “the court determined that the claimed user interface simply provided a trader with more information to facilitate market trades, which improved the business process of market trading but did not improve computers or technology”; see also MPEP 2106.04(d)(1), 2106.05(a), and 2106.05(f)). The claims are directed to the abstract idea. Also, there does not appear to be any particular structure or machine, treatment or prophylaxis, transformation, or any other meaningful application that would render the claims eligible at Step 2A, Prong Two.
Step 2B of the subject matter eligibility test (see MPEP 2106.05).
Claims 1 and 13 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above, the claims recite additional steps of receiving first and second image data captured at different times. These steps represent mere data gathering, data outputting, or pre-/post-/extra-solution activities that are necessary for use of the recited judicial exception and are recited at a high level of generality. Furthermore, as discussed above, the limitations with respect to the processor language amount to mere instructions to implement the abstract idea on a computer. As discussed with respect to Step 2A, Prong Two, the additional elements in the claims amount to no more than insignificant extra-solution activity and mere instructions to apply the exception using a generic computer component. The same analysis applies here in Step 2B and does not provide an inventive concept. The data gathering steps that were considered insignificant extra-solution activity in Step 2A, Prong Two have been re-evaluated in Step 2B and determined to be well-understood, routine, conventional activity in the field.
This is evidenced by Garibaldi (US 2009/0105579 A1), which discloses:
¶0026, ‘While registration methods of images from a similar modality, such as x-ray fluoroscopy projections to CT image data or ultrasound frame to frame have been known in the art for more than a decade, more recently specific techniques such a mutual image information have been proposed to effect co-registration of images acquired by different modalities.’
For these reasons, there is no inventive concept, and the claims are not patent eligible. Even when viewed as a whole, nothing in the claims adds significantly more to the abstract idea.
Dependent Claims
Claim 3 describes changing a predetermined tool trajectory based on the updated digital model; this also occurs after the correlation process and uses the updated model to make adjustments. Such post-processing of data does not transform the abstract idea into a practical application.
Claim 4 describes measuring an anatomical parameter based on the updated digital model. The measurement takes place after the correlation step, using the correlated data to perform additional analysis. This post-correlation activity is a routine data-processing step.
Claim 5 describes comparing the measured anatomical parameter to a target anatomical parameter. The comparison takes place after the measurement, which is itself post-solution. The act of comparing values after correlating data is an additional mental step, which is an abstract idea.
Claim 7 describes updating the surgical plan based on the measured anatomical parameter. This step is a post-solution activity, as it takes place after measuring and correlating the anatomical data. Updating a plan as a result of correlated data does not amount to significantly more.
Claim 8 describes including at least one surgical task in the updated surgical plan. This is an additional post-solution step in which the surgical plan is modified after processing the correlated data, which is routine and conventional.
Claims 10 and 16 describe displaying the updated digital model on a user interface. Merely displaying data after it has been processed is a post-solution activity and does not enhance the underlying technology or the correlation process itself.
Claim 17 describes calculating an anatomical angle based on the updated digital model and displaying it. Both the calculation and the display are post-solution activities, as they take place after the correlation step and merely present the result without adding a practical application.
The following dependent claims merely further define extra-solution activities, specify data types, or relate to conventional data gathering processes, which do not add significantly more to the abstract idea, for reasons similar to those stated above:
Claim 9 specifies the timing of performing the abstract idea (correlating data) in real-time. Implementing the abstract idea on a computer merely performs the correlation faster and does not provide significantly more than the abstract idea.
Claim 15 specifies that the correlation and updating happen continuously during data collection, which is an implementation detail that does not amount to significantly more. Data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)).
Claim 18 specifies the particular anatomical object, which does not transform the abstract idea into a practical application; it merely specifies the data gathered in the image data. Data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)).
Claims 12 and 19 merely specify the nature of the data gathered. This is a data gathering step, which does not amount to significantly more than the abstract idea because it merely specifies the type of data. Data gathering and mere instructions to implement an abstract idea on a computer do not integrate a judicial exception into a practical application (MPEP 2106.05(f) and (g)).
Taken alone and in combination, the additional elements do not integrate the judicial exception into a practical application, at least because the abstract idea is not applied, relied on, or used in a meaningful way. They also do not add anything significantly more than the abstract idea. Their collective functions merely provide computer/electronic implementation and processing, with no additional elements beyond those of the abstract idea. Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements individually. There is no indication that the combination of elements improves the functioning of a computer or output device, or improves any technology other than the technical field of the claimed invention. Therefore, the claims are rejected as being directed to non-statutory subject matter.
It is noted that claim 2 is eligible. Specifically, claim 2 positively recites controlling the robot to execute the updated surgical plan based on the navigation guidance (i.e., “control[ling] a robot to execute the updated surgical plan based on navigation guidance”), and the claim as a whole therefore amounts to significantly more than the exception itself.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-10, 12-13, 15-16, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al (US 2013/0060146 A1) in view of Moctezuma de la Barrera et al (US 2019/0069882 A1).
Claim 1: Yang discloses, A segmental tracking method, comprising:
-Yang teaches a method of segmental tracking achieved by processing pre-operative image data to define individual segments and subsequently registering intraoperative image data to these segments to define their updated position and orientation during surgery, ¶0101, ¶0115-0116, ¶0118-0119, ¶0121-0124, ¶0132-0133, ¶0150-0154,
receiving first image data captured at a first time and comprising a first bony anatomical object and a second bony anatomical object anatomically connected to the first bony anatomical object, the first image data being captured using a first imaging modality;
-Yang teaches that the system utilizes pre-operative image data (i.e., captured at a first time) associated with a patient stored on a storage medium, ¶0010, ¶0012, ¶0100, ¶0103, using a first imaging modality, such as CT or MRI, ¶Abstract, ¶0116. The pre-operative image data (i.e., CT image data) captures an anatomical region of interest, such as the entire spine. This data is a 3D image dataset, ¶0115-0116, and is segmented into rigid structure segments that have rotational and translational degrees of freedom with respect to one another, ¶0118-0119. Specifically, Yang discloses the first image data captured at a first time by obtaining the 3D image dataset preoperatively to locate an anatomical region of interest. This data constitutes preoperative image data associated with a patient.
-Yang highlights that the focus of the disclosure is on the spine, which is segmented into individual vertebrae. The CT scan of the patient’s spine, ¶0117, is processed and segmented into individual vertebrae, ¶0119, ¶0140-0141. The output includes a given number, N, of preoperative surface image data segments, (CT_MESH_OB_1 . . . CT_MESH_OB_N) (where N is the number of elements segmented from the structure and, in this example, the number of vertebrae), ¶0119, ¶0153. Thus, a first and a second bony anatomical object that are anatomically connected (i.e., as in a spinal column) are present, and the first and second bony anatomical objects are represented (e.g., vertebra 1 and vertebra 2).
-The preoperative image data is acquired using modalities such as CT, MRI, PET or ultrasound imaging techniques, ¶0009, ¶0099-0100, ¶0115-0116, ¶0117, (i.e., preferably CT - a first imaging modality).
receiving second image data captured at a second time and comprising at least the first bony anatomical object, the second image data comprising tomographic or topographic image data of the first bony anatomical object captured using a second imaging modality;
-Yang discloses that the system utilizes intraoperative data (i.e., captured at a second time) associated with the patient. The topological data (surface image data) is obtained intraoperatively, ¶0009-0010, ¶0102, ¶0114. This data can be captured continuously or on demand during the procedure, ¶0122-0123, ¶0185-0186. The intraoperative data is obtained by optically scanning the exposed surface of the patient, ¶0010-0012, ¶0122-0123. The surface belongs to a rigid structure of interest, such as a vertebra, ¶0075, ¶0090, ¶0107, which corresponds to one of the anatomical objects defined in the pre-operative plan, ¶0102, ¶0113, ¶0121-0123. The second image data is backscattered radiation surface topology data obtained using devices that employ techniques such as structured light illumination, laser triangulation, etc., ¶0071, ¶0090-0091, ¶0095, ¶0113, ¶0121-0123. The second image data is captured using a second imaging modality, such as a surface topology imaging device (e.g., structured light or laser range scanning), ¶0107, ¶0139. This intraoperative scan captures the exposed bony object (e.g., the targeted vertebra), ¶0107, ¶0139. The system of Yang then registers the individual segments from the first dataset to the topological data from the second dataset to track the position and orientation of the anatomy, ¶0123, ¶0151.
correlating a representation of the first bony anatomical object in the tomographic or topographic image data to a representation of the first and second bony anatomical objects in the first image data;
-The core function of Yang’s system is to correlate, or register, the intra-operative data to the pre-operative data. This process is referred to as registration, which aligns datasets from different coordinate systems and/or different techniques, ¶0010-0012, ¶0062, ¶0131. Specifically, the intraoperative backscattered radiation topology data (second image data), which represents the exposed surgical structure (the first anatomical object), ¶0121-0123, is registered to the segmented preoperative image data (first image data), ¶0121-0123.
updating a digital model of the first and second bony anatomical objects based on the correlation to reflect movement of the first bony anatomical object relative to the second bony anatomical object that occurred between the first time and the second time;
-Yang teaches updating a digital model to reflect relative movement between anatomical objects and subsequently updating a surgical plan based on this updated model. Yang teaches that the spine is segmented into rigid structure segments that have rotational and translational degrees of freedom with respect to each other (e.g., individual vertebrae). This means the segments are treated as moveable relative to each other within the overall digital model derived from the pre-operative image data, ¶0118-0119. During surgery, the position of a vertebra (first anatomical object) can shift due to surgical intervention (e.g., pressure applied to the vertebra) or movement of the subject, causing its position to be displaced from the preoperatively determined position, ¶0108, ¶0140.
-The surgical guidance controller registers the intraoperative topological image data, which captures the exposed surface of the moving structure, to the segmented pre-operative image data, ¶0010-0012, ¶0075, ¶0150-0151. This registration generates a transformation matrix (including translation and rotation components, such as roll, pitch, and yaw) for each of the segmented structures, ¶0132-0133, ¶0154. These derived transformation matrices are then applied to the combined segmented structures to update the model and match the structures to the current intraoperative geometry, ¶0132-0133, ¶0154. This process effectively updates the digital model of the structure (e.g., vertebra 23) from its preoperative position to its intraoperative position, ¶0109, ¶0140. This registration is performed individually on segments to reflect movement and is an example of motion correction, ¶0105-0106, ¶0132-0134.
updating a surgical plan based on the updated digital model; and
-Yang teaches that the transformation matrix determined during the registration process enables coordinate remapping for updating a combined surgical plan, ¶0155, and that the method can be provided continuously or discretely during the procedure, ¶0110, ¶0185-0186.
generating navigation guidance to control a robot to execute the updated surgical plan.
-Yang teaches, “it is recognized that such guidance feedback can also be provided to, and utilized by, other persons or systems, such as autonomous or semi-autonomous surgical robotic systems for the automated guidance of such surgical robotic systems.”-¶0227.
Yang fails to disclose that the second imaging modality used is ultrasound imaging.
However, Moctezuma de la Barrera, in the context of ultrasound bone registration with co-modality imaging of the object (bony anatomical object), discloses that the second imaging modality used is ultrasound imaging (¶0033, ‘the ultrasound device 21 may be moved to a position adjacent to and in contact with the patient such that the ultrasound device 21 detects the femur F or the tibia T of the patient. A sweep of the femur F or the tibia T may be performed.’)
- Moctezuma de la Barrera specifically teaches that the method and system utilize this data to register intraoperative ultrasound imaging (second image data) to the pre-operative co-modality imaging (i.e., CT), ¶0029. Ultrasound imaging is specifically used to detect the surface of the object (i.e., the bone surface). The system processes the ultrasound data to reconstruct a larger area of the bone surface and extracts a point cloud of the object surface, ¶Abstract, ¶0009, ¶0042. This is indicative of surface (topographic) data. The ultrasound imaging is generated by propagating waves along scanlines to generate frames, and the workflow involves analyzing these frames (i.e., first and second steered frames) to segment the anatomy. This is indicative of tomographic (i.e., slice/frame) data, ¶0030, ¶0036.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the second imaging modality of Yang such that it utilizes ultrasound imaging, as taught by Moctezuma de la Barrera. The motivation for doing so is that it yields predictable results, such as an improved workflow for registering ultrasound imaging to co-modality imaging that overcomes the disadvantages of insufficient detection coverage of the visible bone surface or the occurrence of false positive detections, as suggested by Moctezuma de la Barrera, ¶0002-0003.
Claim 2: Modified Yang discloses all the elements above in claim 1. Yang discloses, further comprising controlling the robot to execute the updated surgical plan based on the navigation guidance. (¶0102, ‘generating useful information to guide surgery in the form of co-registered images, and displaying or otherwise communicating this surgical guidance information to a surgeon or operator.’, ¶0227, ‘can also be provided to, and utilized by, other persons or systems, such as autonomous or semi-autonomous surgical robotic systems for the automated guidance of such surgical robotic systems.’, ¶0229, ‘Using the present method, this would allow precise placement of components onto the base structure via the robotic arm’, ¶0230, ‘Such portability may be suitable to be fitted onto a mobile robot, to perform object identification to navigate a terrain, and perform object manipulation through a tracked robotic arm.’)
Claim 3: Modified Yang discloses all the elements above in claim 1. Yang discloses, wherein updating the surgical plan comprises changing a predetermined tool trajectory based on the updated digital model. (¶0103, ‘During surgery, the preoperative plan is updated to reflect the intraoperative geometry of a patient's spine with the optimal trajectory and a cone of acceptance, described below, as a guide to assist a surgeon during pedicle screw insertion.’; ¶0104, ‘The cone of acceptance 25 is defined by a range of trajectories relative to the vertebrae 23, along which the pedicle screw can be securely implanted into the pedicle canal without damaging the spinal cord 24, sparing the surrounding peripheral nerves and blood vessels, and does not protrude out of the bone. The range of available trajectories has limited lateral and angular freedom in light of the narrow middle section of the pedicle canal. Taken together, the trajectories collectively define a frustum conical shape with a wider end at an entry surface of the vertebral arch.’; ¶0105, ‘The range 28 of available trajectories relative to a vertebra 23 is dependent on: (1) the dimensions of the vertebra; (2) the orientation of the vertebra 23; and (3) the size of the pedicle screw. The cone of acceptance 25 incorporates the narrowest section of the pedicle canal, along a principle axis 26, for defining the optimal surgical implantation site for safe and secure pedicle screw insertion.’; ¶0109, ‘Referring now to FIG. 7, an example shift in the position of the vertebra 23 from a preoperative position 23 to an intraoperative position 23′ is illustrated. The vertebrae 23 preoperative position is used to develop the surgical plan. In developing the surgical plan, the principle axis 26 is determined to ensure avoidance of the spinal cord 24. The preoperative positions of the structures are indicated with solid lines. During surgery, positions can shift due to, for example, surgical intervention and change in subject position, as noted above. The updated locations of the target vertebrae 23′, principle axis 26′, and spinal cord 24′ are determined by the system 100 and outputted on the display 4. Accordingly, system 100 provides a dynamically updated surgical plan that is registered to the patient anatomy in real-time.’)
Claim 4: Modified Yang discloses all the elements above in claim 1, Yang discloses, further comprising measuring an anatomical parameter based on the updated digital model. (FIG. 7, ¶0109, ‘In developing the surgical plan, the principle axis 26 is determined to ensure avoidance of the spinal cord 24’)
-The principal axis is indeed considered an anatomical parameter because it is specifically calculated and referenced in relation to the anatomical structure of the vertebrae.
Claim 5: Modified Yang discloses all the elements above in claim 4, Yang discloses, further comprising comparing the measured anatomical parameter to a target anatomical parameter. (¶0105, ‘The cone of acceptance 25 incorporates the narrowest section of the pedicle canal, along a principle axis 26, for defining the optimal surgical implantation site for safe and secure pedicle screw insertion.’; ¶0111, ‘the surgical plan may include surgical criteria that can be displayed on the co-registered image. Examples of criteria that the surgeon may input into system 100, as part of a surgical plan, include, but are not limited to: the accepted accuracy of screw placement; the coordinates of the point of entry into the vertebra 23 that define the principle axis 26; the accepted angle of screw placement; and the depth of screw placement.’; ¶0112, ‘Referring to FIG. 4, these criteria can be used to calculate a plane of smallest diameter (for example the narrowest section of the pedicle canal), through which the principle axis 26 runs centrally. Due to the spatial registration between the surface of interest and the projector, the calculated plane 27 can then be projected onto the surface of the vertebrae via the projector to provide a desired solution for pedicle screw placement 28. The cone of acceptance 25 coordinates can then be overlaid onto the vertebrae and provided on the display 4.’)
Claim 6: Modified Yang discloses all the elements above in claim 5, Yang discloses, wherein the surgical plan comprises the target anatomical parameter. (¶0106, ‘The cone of acceptance 25 is typically determined as part of a preoperative surgical plan. Methods for incorporating the surgical plan into the intraoperative guidance system are addressed below. The system 100 monitors the orientation of the vertebra 23, which changes during surgery by a number of means, such as during drilling of the vertebra and depression of the spine by the surgeons, and generates guidance feedback, such as an update and display of the cone of acceptance 25,’; ¶0109, ‘system 100 provides a dynamically updated surgical plan that is registered to the patient anatomy in real-time.’)
Claim 7: Modified Yang discloses all the elements above in claim 5, Yang discloses, wherein updating the surgical plan comprises updating the surgical plan based on the measured anatomical parameter to yield an updated surgical plan. (¶0109, ‘During surgery, positions can shift due to, for example, surgical intervention and change in subject position, as noted above. The updated locations of the target vertebrae 23′, principle axis 26′, and spinal cord 24′ are determined by the system 100 and outputted on the display 4. Accordingly, system 100 provides a dynamically updated surgical plan that is registered to the patient anatomy in real-time.’)
Claim 8: Modified Yang discloses all the elements above in claim 7, Yang discloses, wherein the updated surgical plan comprises at least one surgical task for achieving the target anatomical parameter given the measured anatomical parameter. (¶0112, ‘Referring to FIG. 4, these criteria can be used to calculate a plane of smallest diameter (for example the narrowest section of the pedicle canal), through which the principle axis 26 runs centrally. Due to the spatial registration between the surface of interest and the projector, the calculated plane 27 can then be projected onto the surface of the vertebrae via the projector to provide a desired solution for pedicle screw placement 28. The cone of acceptance 25 coordinates can then be overlaid onto the vertebrae and provided on the display 4. The system 100 can remain in “standby mode” until the structure of interest is surgically exposed.’)
-The updated surgical plan comprises the at least one surgical task, i.e., the provided desired solution for pedicle screw placement 28, for achieving the target anatomical parameter given the measured anatomical parameter.
Claim 9: Modified Yang discloses all the elements above in claim 1, Yang discloses, wherein the second image data comprises a data stream of the tomographic or topographic image data, and the correlating and the updating the digital model occur in real-time or near real-time. (¶0109, ‘During surgery, positions can shift due to, for example, surgical intervention and change in subject position, as noted above. The updated locations of the target vertebrae 23′, principle axis 26′, and spinal cord 24′ are determined by the system 100 and outputted on the display 4. Accordingly, system 100 provides a dynamically updated surgical plan that is registered to the patient anatomy in real-time.’, see also ¶Abstract, ¶0010-0011, ¶0090, ¶0100, ¶0114 – regarding the topological image data and surface topology image data that is consistently described with respect to the second image data.)
Claim 10: Modified Yang discloses all the elements above in claim 1, Yang discloses, further comprising displaying the updated digital model on a user interface. (FIG. 6a-6b; ¶0108, ‘An updated position of the vertebra 23′ can be determined and outputted by the system 100 on the display 4.’; ¶0110, ‘Intraoperative image updates of the vertebrae 23 can be provided continuously or discretely according to input into the system 100 by, for example, a surgeon. In the situation where updates are provided continuously, the system 100 can operate autonomously obviating the need for the surgeon to input any additional data. In the situation where updates are provided discretely, for example updates provided at single time points, the surgeon can request an image data update by inputting a request into the system 100. The updated plan is provided on the display device 4 on command without any other user interface. The updated image data and related updated intraoperative surgical plan enable a surgeon to accurately implant, for example, a pedicle screw into a vertebra 23.’; ¶0111, ‘the surgical plan may include surgical criteria that can be displayed on the co-registered image. Examples of criteria that the surgeon may input into system 100, as part of a surgical plan, include, but are not limited to: the accepted accuracy of screw placement; the coordinates of the point of entry into the vertebra 23 that define the principle axis 26; the accepted angle of screw placement; and the depth of screw placement.’)
Claim 12: Modified Yang discloses all the elements above in claim 1, Yang discloses, wherein correlating the representation of the first bony anatomical object in the second image data to the representation of the first and second bony anatomical objects in the first image data is performed without fiducial markers being on the first and second bony anatomical object.
(¶Abstract, ‘three-dimensional image data associated with an object or patient is registered to topological image data obtained using a surface topology imaging device.’; ¶0114, ‘Intraoperative topology imaging data is then acquired in step 51 and registered to the pre-operative image data for providing guidance feedback to guide the surgical procedure intraoperatively.’)
See also ¶0087, ¶0108, ¶0113, ¶0124-0125 – regarding fiducial-free guidance of the system of Yang: “it is an advantage of the present system that fiducial markers are not required for surgical guidance.” (¶0087).
Claim 13: A segmental tracking system, comprising:
-Yang teaches a method of segmental tracking achieved by processing pre-operative image data to define individual segments and subsequently registering intraoperative image data to these segments to define their updated position and orientation during surgery, ¶0101, ¶0115-0116, ¶0118-0119, ¶0121-0124, ¶0132-0133, ¶0150-0154.
a communication interface; (workstation 7; ¶0085, ‘User workstation 7 may consist of display 4, such as a high definition monitor, the surgical guidance controller 3, and user interface 5, such as a keyboard, for inputting instructions into the system 100’) (¶0099, ‘Image dataset provided to system 100 can include any of the following non-limiting examples: preoperative 3D image data of a surgical structure of interest, such as the spine, in a subject acquired, for example, using any one of PET, CT, MRI, or ultrasound imaging techniques; a preoperative surgical plan developed by a clinical practitioner (for example, a surgeon), and a surface topology image dataset, optionally including texture data, of the rigid surgical structure of interest.’)
a second imaging device; (¶0009, ‘The surface topology imaging device may be rigidly attached to an optical position measurement system’; ¶0114, ‘Intraoperative topology imaging data is then acquired in step 51 and registered to the pre-operative image data for providing guidance feedback to guide the surgical procedure intraoperatively.’)
at least one processor; and (surgical guidance controller 3)
a memory storing instructions for execution by the at least one processor that, when executed, cause the at least one processor to: (¶0085, ‘memory storage device 2 to carry out the methods described herein’; ¶0074, ‘Surgical guidance controller 3 can be, for example, a processing unit and associated memory containing one or more computer programs to control the operation of the system, the processing unit in communication with a user interface unit 5 and the display 4. In one example, surgical guidance controller 3 may be a computing system such as a personal computer or other computing device, for example in the form of a computer workstation, incorporating a hardware processor and memory, where computations are performed by the processor in accordance with computer programs stored in the memory to carry out the methods described herein. For example, the processor can be a central processing unit or a combination of a central processing unit and a graphical processing unit’)
receive, via the communication interface, first image data captured at a first time and comprising a first bony anatomical object and a second bony anatomical object anatomically connected to the first bony anatomical object, the first image data being captured using a first imaging modality different than the second imaging device;
-Yang teaches that the system utilizes pre-operative image data (i.e., captured at a first time) associated with a patient stored on a storage medium, ¶0010, ¶0012, ¶0100, ¶0103, using a first imaging modality, such as CT or MRI, ¶Abstract, ¶0116. The pre-operative image data (i.e., CT image data) captures an anatomical region of interest such as the entire spine. This data is a 3D image dataset, ¶0115-0116, and is segmented into rigid structure segments that have rotational and translational degrees of freedom with respect to one another, ¶0118-0119. Specifically, Yang discloses the first image data at a first time by obtaining the 3D image dataset preoperatively to locate an anatomical region of interest. This data constitutes preoperative image data associated with a patient.
-Yang highlights that the focus of the disclosure is on the spine, which is segmented into individual vertebrae. The CT scan of the patient’s spine, ¶0117, is processed and segmented into individual vertebrae, ¶0119, ¶0140-0141. The output includes a given number, N, of preoperative surface image data segments, (CT_MESH_OB_1 . . . CT_MESH_OB_N) (where N is the number of elements segmented from the structure and, in this example, the number of vertebrae), ¶0119, ¶0153. Thus, the first and second bony anatomical objects are present and anatomically connected (i.e., as in a spinal column), and the first and second bony anatomical objects are represented (e.g., vertebra 1 and vertebra 2).
-The preoperative image data is acquired using modalities such as CT, MRI, PET or ultrasound imaging techniques, ¶0009, ¶0099-0100, ¶0115-0116, ¶0117, (i.e., preferably CT - a first imaging modality).
obtain, using the second imaging device, second image data at a second time that comprises at least the first bony anatomical object, the second image data comprising tomographic or topographic image data of the first bony anatomical object;
-Yang discloses the system utilizes intraoperative data (i.e., captured at a second time) associated with the patient. The topological data (surface image data) is obtained intraoperatively, ¶0009-0010, ¶0102, ¶0114. This data can be captured continuously or on demand during the procedure, ¶0122-0123, ¶0185-0186. The intraoperative data is obtained by optically scanning the exposed surface of the patient, ¶0010-0012, ¶0122-0123. The surface belongs to a rigid structure of interest, such as the vertebra, ¶0075, ¶0090, ¶0107, which corresponds to one of the anatomical objects defined in the pre-operative plan, ¶0102, ¶0113, ¶0121-0123. The second image data is backscattered radiation surface topology data obtained using devices that employ techniques such as structured light illumination, laser triangulation, etc., ¶0071, ¶0090-0091, ¶0095, ¶0113, ¶0121-0123. The second image data is captured using a second imaging modality, such as a surface topology imaging device (e.g., structured light or laser range scanning), ¶0107, ¶0139. This intraoperative scan captures the exposed bony object (e.g., the targeted vertebra), ¶0107, ¶0139. The system of Yang then registers the individual segments from the first dataset to the topological data from the second dataset to track the position and orientation of the anatomy, ¶0123, ¶0151.
correlate a representation of the first bony anatomical object in the tomographic or topographic image data to a representation of the first and second bony anatomical objects in the first image data;
-The core function of Yang’s system is to correlate or register the intraoperative data to the preoperative data. This process is referred to as registration, which aligns datasets from different coordinate systems and/or different techniques, ¶0010-0012, ¶0062, ¶0131. Specifically, the intraoperative backscatter radiation topology data (second image data), which represents the exposed surgical structure (the first anatomical object), ¶0121-0123, is registered to the segmented preoperative image data (first image data), ¶0121-0123.
update a digital model of the first and second bony anatomical objects based on the correlation to reflect movement of the first bony anatomical object relative to the second bony anatomical object that occurred between the first time and the second time;
-Yang teaches updating a digital model to reflect relative movement between anatomical objects and subsequently updating a surgical plan based on this updated model. Yang teaches the spine is segmented into rigid structure segments that have rotational and translational degrees of freedom with respect to each other (e.g., individual vertebrae). This means the segments are treated as moveable relative to each other within the overall digital model derived from the pre-operative image data, ¶0118-0119. During surgery, the position of a vertebra (first anatomical object) can shift due to surgical intervention (e.g., pressure applied to the vertebra) or movement of the subject, causing its position to be displaced from the preoperatively determined position, ¶0108, ¶0140.
-The surgical guidance controller registers the intraoperative topological image data, which captures the exposed surface of the moving structure, to the segmented pre-operative image data, ¶0010-0012, ¶0075, ¶0150-0151. This registration generates a transformation matrix (including translation and rotation components, such as roll, pitch, and yaw) for each of the segmented structures, ¶0132-0133, ¶0154. These derived transformation matrices are then applied to the combined segmented structures to update and match the structures to the current intraoperative geometry, ¶0132-0133, ¶0154. This process effectively updates the digital model of the structure (e.g., vertebra 23) from its preoperative position to its intraoperative position, ¶0109, ¶0140. This registration is performed individually on each segment to reflect its movement, and is an example of motion correction, ¶0105-0106, ¶0132-0134.
update a surgical plan based on the updated digital model; and
-Yang teaches the transformation matrix determined during the registration process enables coordinate remapping for updating a combined surgical plan, ¶0155, and that updates can be provided continuously or discretely during the procedure, ¶0110, ¶0185-0186.
generate and output navigation guidance that controls a robot to execute the updated surgical plan.
-Yang teaches, “it is recognized that such guidance feedback can also be provided to, and utilized by, other persons or systems, such as autonomous or semi-autonomous surgical robotic systems for the automated guidance of such surgical robotic systems.”-¶0227.
Yang fails to disclose that the second imaging device is an ultrasound imaging device.
However, Moctezuma de la Barrera, in the context of ultrasound bone registration with co-modality imaging of the object (bony anatomical object), discloses that the second imaging modality used is an ultrasound imaging device (¶0033, ‘the ultrasound device 21 may be moved to a position adjacent to and in contact with the patient such that the ultrasound device 21 detects the femur F or the tibia T of the patient. A sweep of the femur F or the tibia T may be performed.’)
-Moctezuma de la Barrera specifically teaches that the method and system utilize this data to register intraoperative ultrasound imaging (second data) to the pre-operative co-modality imaging (i.e., CT), ¶0029. Ultrasound imaging is specifically used to detect the surface of the object (i.e., the bone surface). The system processes the ultrasound data to reconstruct a larger area of the bone surface and extracts a point cloud of the object surface, ¶Abstract, ¶0009, ¶0042. This is indicative of surface (topographic) data. The ultrasound imaging is generated by propagating waves along scanlines to generate frames. This workflow involves analyzing these frames (i.e., first and second steered frames) to segment the anatomy. This is indicative of tomographic (i.e., slice/frame) data, ¶0030, ¶0036.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the second imaging modality of Yang such that it utilizes an ultrasound imaging device as taught by Moctezuma de la Barrera. The motivation to do so yields predictable results, such as improving the workflow to register ultrasound imaging to a co-modality imaging, which overcomes the disadvantages of insufficient detection coverage of the visible bone surface or the occurrence of false positive detections, as suggested by Moctezuma de la Barrera, ¶0002-0003.
Claim 15: Modified Yang discloses all the elements above in claim 13, Yang discloses, wherein the second image data comprises a data stream of the tomographic or topographic image data, the correlating occurs continuously during receipt of the data stream, and the updating the digital model occurs continuously during the correlating. (¶0102, ‘An example implementation of surgical guidance through the use of backscattered radiation topology imaging can include, for example, acquiring preoperative imaging data and developing a surgical plan, performing intraoperative imaging, and, in combination with the preoperative image data, generating useful information to guide surgery in the form of co-registered images, and displaying or otherwise communicating this surgical guidance information to a surgeon or operator. A preoperative plan can be developed, for example, by a clinician using the preoperative image data, and made available for use in the system. This example implementation enables repetition of intraoperative imaging and generating guidance feedback.’; ¶0103, ‘During surgery, the preoperative plan is updated to reflect the intraoperative geometry of a patient's spine with the optimal trajectory and a cone of acceptance, described below, as a guide to assist a surgeon during pedicle screw insertion.’; ¶0109, ‘During surgery, positions can shift due to, for example, surgical intervention and change in subject position, as noted above. The updated locations of the target vertebrae 23′, principle axis 26′, and spinal cord 24′ are determined by the system 100 and outputted on the display 4. Accordingly, system 100 provides a dynamically updated surgical plan that is registered to the patient anatomy in real-time.’; ¶0110, ‘Intraoperative image updates of the vertebrae 23 can be provided continuously or discretely according to input into the system 100 by, for example, a surgeon. 
In the situation where updates are provided continuously, the system 100 can operate autonomously obviating the need for the surgeon to input any additional data. In the situation where updates are provided discretely, for example updates provided at single time points, the surgeon can request an image data update by inputting a request into the system 100. The updated plan is provided on the display device 4 on command without any other user interface. The updated image data and related updated intraoperative surgical plan enable a surgeon to accurately implant, for example, a pedicle screw into a vertebra 23.’; ¶0185, ‘Continuous updating of surgical guidance feedback may occur autonomously, such that upon completion of one update, another update automatically commences. In such an embodiment, the user is not required to manually input a request for an update from the system 100. Accordingly, the use of system 100 may be advantageous in reducing surgical procedure time, due to real time updates of the surgical structure of interest,’)
(See also ¶Abstract, ¶0010-0011, ¶0090, ¶0100, ¶0114 – regarding the topological image data and surface topology image data that is consistently described with respect to the second image data.)
Claim 16: Modified Yang discloses all the elements above in claim 13, Yang discloses: further comprising a user interface (display 4), and wherein the memory stores additional instructions for execution by the at least one processor that, when executed, further cause the at least one processor to: display the updated digital model on the user interface. (FIG. 6a-6b; ¶0108, ‘An updated position of the vertebra 23′ can be determined and outputted by the system 100 on the display 4.’; ¶0110, ‘Intraoperative image updates of the vertebrae 23 can be provided continuously or discretely according to input into the system 100 by, for example, a surgeon. In the situation where updates are provided continuously, the system 100 can operate autonomously obviating the need for the surgeon to input any additional data. In the situation where updates are provided discretely, for example updates provided at single time points, the surgeon can request an image data update by inputting a request into the system 100. The updated plan is provided on the display device 4 on command without any other user interface. The updated image data and related updated intraoperative surgical plan enable a surgeon to accurately implant, for example, a pedicle screw into a vertebra 23.’; ¶0111, ‘the surgical plan may include surgical criteria that can be displayed on the co-registered image. Examples of criteria that the surgeon may input into system 100, as part of a surgical plan, include, but are not limited to: the accepted accuracy of screw placement; the coordinates of the point of entry into the vertebra 23 that define the principle axis 26; the accepted angle of screw placement; and the depth of screw placement.’)
Claim 18: Modified Yang discloses all the elements above in claim 13, Yang discloses: wherein the first and second bony anatomical objects correspond to first and second vertebrae. (FIG. 3a-3c, ¶0101, ‘the structure can be a bone structure, such as a spinal column, a skull, a hip bone, a foot bone, and a patella. For example, FIG. 3( b) is a schematic of a posterior orientation of a segmented spine; FIG. 3( c) is a schematic of a lateral orientation of the segmented spine’)
-Yang teaches that the system utilizes pre-operative image data (i.e., captured at a first time) associated with a patient stored on a storage medium, ¶0010, ¶0012, ¶0100, ¶0103. The pre-operative image data (i.e., CT image data) captures an anatomical region of interest such as the entire spine. This data is a 3D image dataset, ¶0115-0116, and is segmented into rigid structure segments that have rotational and translational degrees of freedom with respect to one another, ¶0118-0119. The CT scan of the patient’s spine, ¶0117, is processed and segmented into individual vertebrae, ¶0119, ¶0140-0141. The output includes a given number, N, of preoperative surface image data segments, (CT_MESH_OB_1 . . . CT_MESH_OB_N) (where N is the number of elements segmented from the structure and, in this example, the number of vertebrae), ¶0119, ¶0153. Thus, the first and second anatomical objects are represented (e.g., vertebra 1 and vertebra 2).
Claim 19: Modified Yang discloses all the elements above in claim 13, Yang discloses: wherein correlating the representation of the first bony anatomical object in the tomographic or topographic image data to the representation of the first and second bony anatomical objects in the first image data is performed without fiducial markers being on the first and second bony anatomical objects (¶Abstract, ‘three-dimensional image data associated with an object or patient is registered to topological image data obtained using a surface topology imaging device.’; ¶0114, ‘Intraoperative topology imaging data is then acquired in step 51 and registered to the pre-operative image data for providing guidance feedback to guide the surgical procedure intraoperatively.’)
See also ¶0087, ¶0108, ¶0113, ¶0124-0125 – regarding fiducial-free guidance of the system of Yang: “it is an advantage of the present system that fiducial markers are not required for surgical guidance.” (¶0087).
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Yang et al (US 2013/0060146 A1) in view of Moctezuma de la Barrera et al (US20190069882A1), as applied to claim 13, in further view of Matsumoto et al (US 2020/0069243 A1).
Claim 17: Modified Yang discloses all the elements above in claim 13, Yang discloses: wherein the memory stores additional instructions for execution by the at least one processor that, when executed, further cause the at least one processor to:
Yang fails to disclose:
calculate an anatomical angle between the first bony anatomical object and the second bony anatomical object based on the updated digital model; and
display the calculated anatomical angle on a user interface.
However, Matsumoto et al, in the context of spinal-column arrangement estimation, discloses calculating an anatomical angle between the first bony anatomical object and the second bony anatomical object based on the updated digital model, and displaying the calculated anatomical angle on a user interface. (¶0124, ‘In step S42, the SCAE unit 12 of the CPU 1 estimates spinal-column arrangement from the unknown 3D image acquired by the image acquisition unit 11 using learning data (accumulated data) after machine learning stored in the learning data memory 23. An estimation result of the spinal-column arrangement is stored in the estimation data memory 24.’; ¶0126, ‘In step S44, the image output control unit 14 of the CPU 1 reads the spinal-column arrangement estimated by the SCAE unit 12 and the Cobb angle calculated by the angle calculation unit 13 from the estimation data memory 24, and displays the read spinal-column arrangement and Cobb angle on, for example, a screen of a display corresponding to the output device 4.’)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the processor of modified Yang such that it calculates an anatomical angle between the first bony anatomical object and the second bony anatomical object based on the updated digital model, and displays the calculated anatomical angle on a user interface, as taught by Matsumoto. The motivation to do so yields predictable results, such as enabling a doctor to accurately diagnose the presence or absence and degree of scoliosis with reference to the Cobb angle, and variation in diagnosis among doctors may be reduced, ¶0127 of Matsumoto.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Nicholas Robinson whose telephone number is (571)272-9019. The examiner can normally be reached M-F 9:00AM-5:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached at (571) 272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.A.R./Examiner, Art Unit 3798
/PASCAL M BUI PHO/Supervisory Patent Examiner, Art Unit 3798