DETAILED ACTION
*Note in the following document:
1. Texts in italic bold format are limitations quoted either directly or conceptually from claims/descriptions disclosed in the instant application.
2. Texts in regular italic format are quoted directly from cited reference or Applicant’s arguments.
3. Texts with underlining are added by the Examiner for emphasis.
4. Acronym “PHOSITA” stands for “Person Having Ordinary Skill In The Art”.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 30 is objected to because of the following informalities:
Claim 30 recites visualize at least one portion of a surgical site that is occluded with respect to a field of view of an operator via the third imaging representation (last limitation). The Examiner suggests replacing “a surgical site” with “the surgical site,” since the surgical site is already recited in line 4. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 25-29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 25-29 claim a surgical hub. However, claims 25-29 do not recite any components of the hub, only a series of functions that the hub is expected to perform.
Furthermore, Claims 26 and 28-29 recite the limitation "the control circuit," and Claim 27 recites the limitation "the pre-operative data." There is insufficient antecedent basis for these limitations in the claims.
Finally, Claim 27 comprises two sentences. According to MPEP 608.01(m), each claim begins with a capital letter and ends with a period; periods may not be used elsewhere in the claims except for abbreviations. See Fressola v. Manbeck, 36 USPQ2d 1211 (D.D.C. 1995); MPEP 608.01(m), Form of Claims [R-10.2019]. Claim 27 jumps from limiting the term pre-operative data to an overlay step that is supposed to be executed by a control circuit. It is unclear what exactly Applicant intends to claim. For compact prosecution, the Examiner interprets Claim 27 as if it were two dependent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 20, 22-24, 30, and 32-33 are rejected under 35 U.S.C. 103 as being unpatentable over Esterberg (US 2017/0027651 A1) in view of Valadez et al. (US 2014/0003700 A1).
Regarding Claim 20, Esterberg discloses a surgical hub for use with a surgical system in a surgical procedure performed in an operating room ([0015]: In at least one embodiment, a method of surgical navigation may include receiving an external three-dimensional model of a surgical site from the viewpoint of a headset, wherein the external three-dimensional model is derived from reflected light), the surgical hub comprising:
a control circuit (Fig.17 and [0051]: FIG. 17 shows a block diagram of an example computing system by which at least aspects of surgical navigation may be implemented) configured to:
receive a first imaging representation for a surgical site, the first imaging representation comprising a coordinate system (see Fig.8A/B: the images captured by CCD camera A/B are interpreted as the first imaging representation. Also see [0134]: FIG. 8B shows an example data transformation by triangulation to generate an array of polar datapoints having depth and position. Structured Illumination has the advantage that a polar coordinate system and zero reference point are inherent in the imaging projector and can be used to speed analysis of the remaining mesh map. Data analysis is illustrated schematically in FIG. 8B and generates a dataset having position coordinates and elevation coordinates that can be used to draw a wireframe model and describe a body volume that can then be supplemented with data from one or more internal three-dimensional solid model scans);
[Greyscale image: media_image1.png, 588 x 426]
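As an illustrative sketch of the data transformation Esterberg's FIG. 8B describes (polar datapoints having depth and position converted into position and elevation coordinates usable to draw a wireframe model), the following converts polar datapoints to Cartesian coordinates. The function name and sample values are hypothetical and appear in neither reference.

```python
import math

def polar_to_cartesian(points):
    """Convert (angle_deg, radius, elevation) polar datapoints, of the kind
    produced by structured-illumination triangulation, into (x, y, z)
    coordinates suitable for a wireframe model.  Illustrative only."""
    cartesian = []
    for angle_deg, radius, elevation in points:
        theta = math.radians(angle_deg)
        cartesian.append((radius * math.cos(theta),
                          radius * math.sin(theta),
                          elevation))
    return cartesian

# A few hypothetical datapoints around the projector's zero reference point.
sample = [(0.0, 10.0, 1.5), (90.0, 10.0, 2.0), (180.0, 10.0, 1.0)]
print(polar_to_cartesian(sample))
```

A dataset so converted could then, as Esterberg describes, be supplemented with data from internal three-dimensional solid model scans.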
receive at least one second imaging representation for the surgical site, the at least one second imaging representation comprising a range of wavelengths outside of a visible light spectrum ([0089]: Once a wireframe map of the surgical field 115 is obtained, anatomical reference points may be identified by image analysis or may be assigned by an operator, and a solid model may be oriented and aligned so that the anatomical features not visible beneath the exterior view of the surgical field 115 may be matched and projected in the virtual view displayed in the eyepiece. The solid model data may be acquired from computerized tomography (CT) scans, magnetic resonance imaging (MRI), or other scans already of record in the patient's digital chart. In a general computational approach, thin slices may be merged to generate a solid model, and the model may be digitally segmented to isolate individual anatomical structures such as bones, vessels, organs and the like. CT and MRI images comprise a range of wavelengths outside of the visible light spectrum);
overlay the first imaging representation and the at least one second imaging representation ([0122]-[0124]: FIG. 6 shows an example process flow 600 for generating a three-dimensional virtual fusion view by which at least aspects of surgical navigation may be implemented, arranged in accordance with at least some embodiments described herein. … In Step 3, the wireframe model and a reference 3D solid model (such as from a CT scan) may then be processed by data fusion processing as known in the art to produce a virtual solid model, termed here a “fusion 3D model” extending from the surface of the surgical field to any internal structures observable in the CT scan, as correctly registered according to the body position and observable anatomical features that were captured in the earlier step …).
[Greyscale image: media_image2.png, 604 x 422]
Esterberg discloses that fusion had been known to a PHOSITA before the effective filing date of the claimed invention ([0124]: In Step 3, the wireframe model and a reference 3D solid model (such as from a CT scan) may then be processed by data fusion processing as known in the art to produce a virtual solid model, termed here a “fusion 3D model” extending from the surface of the surgical field to any internal structures observable in the CT scan, as correctly registered according to the body position and observable anatomical features that were captured in the earlier step).
However, Esterberg fails to explicitly disclose performing at least one of translation, rotation, scaling, adjustment, and distortion to the at least one second imaging representation via the coordinate system to overlay the first and second images.
However, Valadez discloses fusing two images through translation, rotation, scaling, and other adjustments ([0057]: the system 100 rigidly registers the intra-operative image with the reference image to generate a registered intra-operative image. The rigid registration may be performed by, for instance, a linear transformation (e.g., rotation, scaling, translation, or other affine transformation) of the intra-operative image to align it with the target region in the pre-operative image). Therefore, it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Valadez into that of Esterberg and to include the limitation of perform at least one of translation, rotation, scaling, adjustment, and distortion to the at least one second imaging representation via the coordinate system; and overlay the first imaging representation and the at least one second imaging representation that is performed with at least one of translation, rotation, scaling, adjustment, and distortion with each other, thereby generating a third imaging representation for the surgical site, the third imaging representation comprising a range of wavelengths within the visible light spectrum, in order to fuse the two imaging representations using the known technology.
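For illustration only, the rigid registration Valadez describes in [0057] (rotation, scaling, and translation applied as a linear transformation in a shared coordinate system, followed by overlay of the aligned images) can be sketched as below. All function names, parameters, and sample values are hypothetical and are not drawn from either reference.

```python
import math

def rigid_transform(points, angle_deg=0.0, scale=1.0, tx=0.0, ty=0.0):
    """Apply rotation, scaling, and translation to 2-D image coordinates in a
    shared coordinate system -- the kind of linear (affine) transformation
    used for rigid registration.  Illustrative sketch only."""
    theta = math.radians(angle_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    out = []
    for x, y in points:
        xr = scale * (x * cos_t - y * sin_t) + tx
        yr = scale * (x * sin_t + y * cos_t) + ty
        out.append((xr, yr))
    return out

def overlay(base, registered, alpha=0.5):
    """Blend two aligned scalar 'images' (here, flat lists of intensities)
    to mimic generating a fused third representation."""
    return [alpha * a + (1 - alpha) * b for a, b in zip(base, registered)]

# Register a second image's landmarks into the first image's frame, then blend.
landmarks = [(1.0, 0.0), (0.0, 1.0)]
print(rigid_transform(landmarks, angle_deg=90.0, scale=2.0, tx=1.0, ty=1.0))
print(overlay([100, 200], [50, 150]))
```

In practice the transformation parameters would be estimated by an optimization that maximizes alignment between the two images; the sketch shows only the application of known parameters.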
Regarding Claim 22/32, Esterberg further teaches or suggests wherein the control circuit is further configured to: overlay the first imaging representation and the at least one second imaging representation that is performed with at least one of translation, rotation, scaling, adjustment, and distortion with each other in real-time ([0159]: Data may be streamed to the eyepiece for instant reference and may be updated in real time as the surgeon indicates relevant features of interest).
Regarding Claim 23, Esterberg further discloses visualize at least one portion of the surgical site that is occluded with respect to a field of view of an operator via the third imaging representation ([0102]: FIG. 4 shows an example headset view 400 of a surgical field by which at least aspects of surgical navigation may be implemented, arranged in accordance with at least some embodiments described herein. The headset view 400 shows an incision 405 and a virtual CT model 410 of underlying boney structures in anatomical registration).
[Greyscale image: media_image3.png, 477 x 438]
Regarding Claim 24/33, Esterberg further discloses visualize at least one portion of the surgical site that is occluded with respect to a field of view of an operator during the surgical procedure via the third imaging representation ([0102]: FIG. 4 shows an example headset view 400 of a surgical field by which at least aspects of surgical navigation may be implemented, arranged in accordance with at least some embodiments described herein. The headset view 400 shows an incision 405 and a virtual CT model 410 of underlying boney structures in anatomical registration).
Regarding Claim 25, Esterberg discloses a surgical hub for use with a surgical system in a surgical procedure performed in an operating room ([0015]: In at least one embodiment, a method of surgical navigation may include receiving an external three-dimensional model of a surgical site from the viewpoint of a headset, wherein the external three-dimensional model is derived from reflected light), the surgical hub comprising:
a control circuit (Fig.17 and [0051]: FIG. 17 shows a block diagram of an example computing system by which at least aspects of surgical navigation may be implemented) configured to:
receive a first imaging representation for a surgical site, the first imaging representation comprising a coordinate system (see Fig.8A/B and [0134]: FIG. 8B shows an example data transformation by triangulation to generate an array of polar datapoints having depth and position. Structured Illumination has the advantage that a polar coordinate system and zero reference point are inherent in the imaging projector and can be used to speed analysis of the remaining mesh map. Data analysis is illustrated schematically in FIG. 8B and generates a dataset having position coordinates and elevation coordinates that can be used to draw a wireframe model and describe a body volume that can then be supplemented with data from one or more internal three-dimensional solid model scans);
[Greyscale image: media_image1.png, 588 x 426]
receive a plurality of second imaging representations each for the surgical site, at least one imaging representation of the second imaging representations comprising a range of wavelengths outside of a visible light spectrum, a remaining imaging representation of the second imaging representations comprising a range of wavelengths within the visible light spectrum ([0089]: Once a wireframe map of the surgical field 115 is obtained, anatomical reference points may be identified by image analysis or may be assigned by an operator, and a solid model may be oriented and aligned so that the anatomical features not visible beneath the exterior view of the surgical field 115 may be matched and projected in the virtual view displayed in the eyepiece. The solid model data may be acquired from computerized tomography (CT) scans, magnetic resonance imaging (MRI), or other scans already of record in the patient's digital chart. In a general computational approach, thin slices may be merged to generate a solid model, and the model may be digitally segmented to isolate individual anatomical structures such as bones, vessels, organs and the like. CT and MRI images comprise a range of wavelengths outside of the visible light spectrum);
overlay the first imaging representation and the at least one second imaging representation ([0122]-[0124]: FIG. 6 shows an example process flow 600 for generating a three-dimensional virtual fusion view by which at least aspects of surgical navigation may be implemented, arranged in accordance with at least some embodiments described herein. … In Step 3, the wireframe model and a reference 3D solid model (such as from a CT scan) may then be processed by data fusion processing as known in the art to produce a virtual solid model, termed here a “fusion 3D model” extending from the surface of the surgical field to any internal structures observable in the CT scan, as correctly registered according to the body position and observable anatomical features that were captured in the earlier step …).
[Greyscale image: media_image2.png, 604 x 422]
Esterberg discloses that fusion had been known to a PHOSITA before the effective filing date of the claimed invention ([0124]: In Step 3, the wireframe model and a reference 3D solid model (such as from a CT scan) may then be processed by data fusion processing as known in the art to produce a virtual solid model, termed here a “fusion 3D model” extending from the surface of the surgical field to any internal structures observable in the CT scan, as correctly registered according to the body position and observable anatomical features that were captured in the earlier step). However, Esterberg fails to explicitly disclose performing at least one of translation, rotation, scaling, adjustment, and distortion to the at least one second imaging representation via the coordinate system to overlay the first and second images.
However, Valadez discloses fusing two images through translation, rotation, scaling, and other adjustments ([0057]: the system 100 rigidly registers the intra-operative image with the reference image to generate a registered intra-operative image. The rigid registration may be performed by, for instance, a linear transformation (e.g., rotation, scaling, translation, or other affine transformation) of the intra-operative image to align it with the target region in the pre-operative image). Therefore, it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Valadez into that of Esterberg and to include the limitation of perform at least one of translation, rotation, scaling, adjustment, and distortion to the at least one second imaging representation via the coordinate system; and overlay the first imaging representation and the at least one second imaging representation that is performed with at least one of translation, rotation, scaling, adjustment, and distortion with each other, thereby generating a third imaging representation for the surgical site, the third imaging representation comprising a range of wavelengths within the visible light spectrum, in order to fuse the two imaging representations using the known technology.
Regarding Claim 30, Claim 30 is similar to the combination of Claims 20 and 23. Therefore, the same reasons for rejection of Claims 20 and 23 also apply to Claim 30.
Claims 21 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Esterberg (US 2017/0027651 A1) in view of Valadez et al. (US 2014/0003700 A1) as applied to Claims 20 and 30 above, and further in view of Liu et al. (US 2018/0270474 A1).
Regarding Claim 21/31, Esterberg as modified by Valadez fails to disclose receive at least one of the first imaging representation and the at least one second imaging representation from an imaging source.
However Liu, in the same field of endeavor, discloses receive at least one of the first imaging representation and the at least one second imaging representation from an imaging source (Claim 19: wherein the imaging or sensing instrument is configured to perform optical spectroscopies, absorption spectroscopy, fluorescence spectroscopy, Raman spectroscopy, coherent anti-Stokes Raman spectroscopy (CARS), surface-enhanced Raman spectroscopy, Fourier transform spectroscopy, Fourier transform infrared spectroscopy (FTIR), multiplex or frequency-modulated spectroscopy, X-ray spectroscopy, attenuated total reflectance spectroscopy, electron paramagnetic spectroscopy, electron spectroscopy, gamma-ray spectroscopy, acoustic resonance spectroscopy, auger spectroscopy, cavity ring down spectroscopy, circular dichroism spectroscopy, cold vapour atomic fluorescence spectroscopy, correlation spectroscopy, deep-level transient spectroscopy, dual polarization interferometry, EPR spectroscopy, force spectroscopy, Hadron spectroscopy, Baryon spectroscopy, meson spectroscopy, inelastic electron tunneling spectroscopy (IETS), laser-induced breakdown spectroscopy (LIBS), mass spectroscopy, Mossbauer spectroscopy, neutron spin echo spectroscopy, photoacoustic spectroscopy, photoemission spectroscopy, photothermal spectroscopy, pump-probe spectroscopy, Raman optical activity spectroscopy, saturated spectroscopy, scanning tunneling spectroscopy, spectrophotometery, ultraviolet photoelectron spectroscopy (UPS), video spectroscopy, vibrational circular dichroism spectroscopy, X-ray photoelectron spectroscopy (XPS), color microscopy, reflectance microscopy, fluorescence microscopy, oxygen-saturation microscopy, polarization microscopy, infrared microscopy, interference microscopy phase contrast microscopy, differential interference contrast microscopy, hyperspectral microscopy, total internal reflection fluorescence microscopy, confocal microscopy, non-linear microscopy, 2-photon 
microscopy, second-harmonic generation microscopy, super-resolution microscopy, photoacoustic microscopy, structured light microscopy, 4Pi microscopy, stimulated emission depletion microscopy, stochastic optical reconstruction microscopy, ultrasound microscopy, reflectance imaging, fluorescence imaging, Cerenkov imaging, polarization imaging, ultrasound imaging, radiometric imaging, oxygen saturation imaging, optical coherence tomography, infrared imaging, thermal imaging, photoacoustic imaging, spectroscopic imaging, hyper-spectral imaging, fluoroscopy, gamma imaging, X-ray computed tomography, or combinations thereof).
Therefore, it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Liu into that of Esterberg as modified and to include the limitation of receive at least one of the first imaging representation and the at least one second imaging representation from an imaging source, in order to meet the need for an imaging system that utilizes 3D scanning to provide surface topography images as suggested by Liu ([0006]).
Claims 25-29 are rejected under 35 U.S.C. 103 as being unpatentable over Esterberg (US 2017/0027651 A1) in view of Valadez et al. (US 2014/0003700 A1) and Liu et al. (US 2018/0270474 A1).
Regarding Claim 25, Esterberg discloses a surgical hub for use with a surgical system in a surgical procedure performed in an operating room ([0015]: In at least one embodiment, a method of surgical navigation may include receiving an external three-dimensional model of a surgical site from the viewpoint of a headset, wherein the external three-dimensional model is derived from reflected light), the surgical hub comprising:
receive a first imaging representation for a surgical site, the first imaging representation comprising a coordinate system (see Fig.8A/B: the images captured by CCD camera A/B are interpreted as the first imaging representation. Also see [0134]: FIG. 8B shows an example data transformation by triangulation to generate an array of polar datapoints having depth and position. Structured Illumination has the advantage that a polar coordinate system and zero reference point are inherent in the imaging projector and can be used to speed analysis of the remaining mesh map. Data analysis is illustrated schematically in FIG. 8B and generates a dataset having position coordinates and elevation coordinates that can be used to draw a wireframe model and describe a body volume that can then be supplemented with data from one or more internal three-dimensional solid model scans);
[Greyscale image: media_image1.png, 588 x 426]
receive at least one imaging representation of the second imaging representations comprising a range of wavelengths outside of a visible light spectrum ([0089]: Once a wireframe map of the surgical field 115 is obtained, anatomical reference points may be identified by image analysis or may be assigned by an operator, and a solid model may be oriented and aligned so that the anatomical features not visible beneath the exterior view of the surgical field 115 may be matched and projected in the virtual view displayed in the eyepiece. The solid model data may be acquired from computerized tomography (CT) scans, magnetic resonance imaging (MRI), or other scans already of record in the patient's digital chart. In a general computational approach, thin slices may be merged to generate a solid model, and the model may be digitally segmented to isolate individual anatomical structures such as bones, vessels, organs and the like. CT and MRI images comprise a range of wavelengths outside of the visible light spectrum);
overlay the first imaging representation and the at least one imaging representation ([0122]-[0124]: FIG. 6 shows an example process flow 600 for generating a three-dimensional virtual fusion view by which at least aspects of surgical navigation may be implemented, arranged in accordance with at least some embodiments described herein. … In Step 3, the wireframe model and a reference 3D solid model (such as from a CT scan) may then be processed by data fusion processing as known in the art to produce a virtual solid model, termed here a “fusion 3D model” extending from the surface of the surgical field to any internal structures observable in the CT scan, as correctly registered according to the body position and observable anatomical features that were captured in the earlier step …).
[Greyscale image: media_image2.png, 604 x 422]
Esterberg discloses that fusion had been known to a PHOSITA before the effective filing date of the claimed invention ([0124]: In Step 3, the wireframe model and a reference 3D solid model (such as from a CT scan) may then be processed by data fusion processing as known in the art to produce a virtual solid model, termed here a “fusion 3D model” extending from the surface of the surgical field to any internal structures observable in the CT scan, as correctly registered according to the body position and observable anatomical features that were captured in the earlier step).
However, Esterberg fails to explicitly disclose performing at least one of translation, rotation, scaling, adjustment, and distortion to the at least one second imaging representation via the coordinate system to overlay the first and second images.
However, Valadez discloses fusing two images through translation, rotation, scaling, and other adjustments ([0057]: the system 100 rigidly registers the intra-operative image with the reference image to generate a registered intra-operative image. The rigid registration may be performed by, for instance, a linear transformation (e.g., rotation, scaling, translation, or other affine transformation) of the intra-operative image to align it with the target region in the pre-operative image). Therefore, it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Valadez into that of Esterberg and to include the limitation of perform at least one of translation, rotation, scaling, adjustment, and distortion to the at least one second imaging representation via the coordinate system, in order to fuse the two imaging representations using the known technology.
Esterberg as modified by Valadez fails to disclose that the second imaging representations are a plurality of second imaging representations each for the surgical site, at least one imaging representation of the second imaging representations comprising a range of wavelengths outside of a visible light spectrum, and a remaining imaging representation of the second imaging representations comprising a range of wavelengths within the visible light spectrum.
However Liu, in the same field of endeavor, discloses receiving a plurality of second imaging representations each for the surgical site, at least one imaging representation of the second imaging representations comprising a range of wavelengths outside of a visible light spectrum, a remaining imaging representation of the second imaging representations comprising a range of wavelengths within the visible light spectrum (Claim 19: wherein the imaging or sensing instrument is configured to perform optical spectroscopies, absorption spectroscopy, fluorescence spectroscopy, Raman spectroscopy, coherent anti-Stokes Raman spectroscopy (CARS), surface-enhanced Raman spectroscopy, Fourier transform spectroscopy, Fourier transform infrared spectroscopy (FTIR), multiplex or frequency-modulated spectroscopy, X-ray spectroscopy, attenuated total reflectance spectroscopy, electron paramagnetic spectroscopy, electron spectroscopy, gamma-ray spectroscopy, acoustic resonance spectroscopy, auger spectroscopy, cavity ring down spectroscopy, circular dichroism spectroscopy, cold vapour atomic fluorescence spectroscopy, correlation spectroscopy, deep-level transient spectroscopy, dual polarization interferometry, EPR spectroscopy, force spectroscopy, Hadron spectroscopy, Baryon spectroscopy, meson spectroscopy, inelastic electron tunneling spectroscopy (IETS), laser-induced breakdown spectroscopy (LIBS), mass spectroscopy, Mossbauer spectroscopy, neutron spin echo spectroscopy, photoacoustic spectroscopy, photoemission spectroscopy, photothermal spectroscopy, pump-probe spectroscopy, Raman optical activity spectroscopy, saturated spectroscopy, scanning tunneling spectroscopy, spectrophotometery, ultraviolet photoelectron spectroscopy (UPS), video spectroscopy, vibrational circular dichroism spectroscopy, X-ray photoelectron spectroscopy (XPS), color microscopy, reflectance microscopy, fluorescence microscopy, oxygen-saturation microscopy, polarization microscopy, infrared microscopy, 
interference microscopy phase contrast microscopy, differential interference contrast microscopy, hyperspectral microscopy, total internal reflection fluorescence microscopy, confocal microscopy, non-linear microscopy, 2-photon microscopy, second-harmonic generation microscopy, super-resolution microscopy, photoacoustic microscopy, structured light microscopy, 4Pi microscopy, stimulated emission depletion microscopy, stochastic optical reconstruction microscopy, ultrasound microscopy, reflectance imaging, fluorescence imaging, Cerenkov imaging, polarization imaging, ultrasound imaging, radiometric imaging, oxygen saturation imaging, optical coherence tomography, infrared imaging, thermal imaging, photoacoustic imaging, spectroscopic imaging, hyper-spectral imaging, fluoroscopy, gamma imaging, X-ray computed tomography, or combinations thereof. Notice the imaging system is able to receive and process both visible and invisible images).
Therefore it would have been obvious to a PHOSITA before the effective filing date to incorporate the teaching of Liu into that of Esterberg as modified and to include the limitation of receiving a plurality of second imaging representations each for the surgical site, at least one imaging representation of the second imaging representations comprising a range of wavelengths outside of a visible light spectrum, a remaining imaging representation of the second imaging representations comprising a range of wavelengths within the visible light spectrum and overlay the first imaging representation, the at least one imaging representation, and the remaining imaging representation with each other, thereby generating a third imaging representation for the surgical site, the third imaging representation comprising a range of wavelengths within the visible light spectrum in order to meet the need for an imaging system that utilizes 3D scanning to provide surface topography images as suggested by Liu ([0006]).
Regarding Claim 26, Liu further discloses receive at least one of the first imaging representation and the plurality of second imaging representations from an imaging source (Claim 19). The same reason to combine as that of Claim 25 is applied.
Regarding Claim 27, Esterberg as modified further teaches or suggests wherein the control circuit is further configured to: overlay the first imaging representation and the at least one second imaging representation that is performed with at least one of translation, rotation, scaling, adjustment, and distortion with each other in real-time (Esterberg [0159]: Data may be streamed to the eyepiece for instant reference and may be updated in real time as the surgeon indicates relevant features of interest. Liu [0257]: Still another advantage of the present invention is that the imaging system allows capturing and projecting of information onto a non-even (i.e. uneven, curved, contoured) surface in real-time or near real-time with low spatial latencies).
Regarding Claim 28, Esterberg further discloses visualize at least one portion of the surgical site that is occluded with respect to a field of view of an operator via the third imaging representation ([0102]: FIG. 4 shows an example headset view 400 of a surgical field by which at least aspects of surgical navigation may be implemented, arranged in accordance with at least some embodiments described herein. The headset view 400 shows an incision 405 and a virtual CT model 410 of underlying boney structures in anatomical registration).
[Greyscale image: media_image3.png, 477 x 438]
Regarding Claim 29, Esterberg further discloses visualize at least one portion of the surgical site that is occluded with respect to a field of view of an operator during the surgical procedure via the third imaging representation ([0102]: FIG. 4 shows an example headset view 400 of a surgical field by which at least aspects of surgical navigation may be implemented, arranged in accordance with at least some embodiments described herein. The headset view 400 shows an incision 405 and a virtual CT model 410 of underlying boney structures in anatomical registration).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUN HE whose telephone number is (571)270-7218. The examiner can normally be reached M-F 8:00-5:00 MT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao M Wu can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YINGCHUN HE/Primary Examiner, Art Unit 2613