DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Reicher et al (Pub. No.: US 2016/0364630) in view of Razzaque et al (Pub. No.: US 2014/0343404).
Regarding claims 1 and 16, Reicher et al disclose a method for providing intraoperative surgery guidance, comprising:
generating, with a processor of a computer system, a 3D model (three-dimensional image) of an area of interest on a patient’s body [see 0151] by disclosing the image 1600 provides a user, such as a radiologist with an image (e.g., a three-dimensional image) of an organ with associated biopsied locations [see 0151];
based on the 3D model of the area of interest, providing a representation of the area of interest displayed on a head-mounted display device worn by a user [see 0042] by disclosing a user may interact with the server 102 through one or more intermediary devices, such as a personal computing device laptop, desktop, tablet, smart phone, smart watch or other wearable [see 0042];
the representation of the area of interest displayed as an overlay over the user’s view of the area of interest through the wearable display device during a surgical operation [see 0042, 0072] by disclosing generate a diagnosis that includes a measurement associated with the image and the measurements may be displayed when the images are displayed (e.g., as an overlay) [see 0072, 0080];
the representation including pathology characteristics associated with the area of interest [see 0153-0156];
detecting, with the processor, a tissue removal (biopsy) and a section location associated with the tissue removal [see 0153-0157];
associating the section location with a pathology result associated with the tissue removal (biopsy) in the 3D model of the area of interest to provide an updated 3D model of the area of interest;
and based on the pathology result, updating an indicator associated with the section location of the tissue removal in the representation of the area of interest displayed on the wearable display device [see 0043] to provide an updated representation of the area of interest [see 0089, 0120, 0147] by disclosing the computer system may be updated based on the score [see 0120];
However, Reicher et al does not explicitly disclose that the wearable display device is head-mounted.
Nonetheless, Razzaque et al disclose a user can wear a head mounted display in order to receive 3D images from the image guidance unit 130 [see 0036].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Reicher et al and Razzaque et al by using a head-mounted display, in order for the user to have both hands free.
Regarding claim 2, Reicher et al disclose wherein the 3D model is converted from one or more previously stored Digital Imaging and Communications in Medicine ("DICOM") files of the area of interest [see 0053, 0080].
Regarding claim 3, Reicher et al disclose the updated representation of the area of interest includes a plurality of indicators associated with a plurality of tissue removals, and the plurality of indicators comprise color-coding the area of interest [see 0061], by disclosing that the learning engine 110 may automatically add one or more graphical markers to an image that specify relevant areas of the image for manual review; in some embodiments, these marked images may be included in or associated with a corresponding report, such as a DICOM structured report [see 0081]. Reicher et al further disclose that the learning engine 110 may add a graphical mark to new images of the lung to mark the biopsied region of the lung, and in some embodiments the learning engine 110 may vary one or more characteristics of a graphical mark (e.g., color, size, shape, animation, etc.) to indicate a type of the correlation associated with the graphical mark (e.g., a patient correlation, a time correlation, a prior diagnosis correlation, or an anatomical correlation) [see 0061, 0134].
Regarding claim 4, Reicher et al disclose wherein the pathology result comprises a pathology score, and the plurality of indicators are color-coded [see 0134] according to the pathology score associated with each of the plurality of indicators and one or more preset pathology score thresholds [see 0117-0121].
Regarding claims 6 and 17, Reicher et al does not disclose wherein the section location of the tissue removal is detected via an optical sensor tracking a position of a surgical tool.
Nonetheless, Razzaque et al disclose wherein the section location of the tissue removal is detected via an optical sensor tracking a position of a surgical tool [see 0029-0033].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Reicher et al and Razzaque et al by using an optical sensor tracking a position of a surgical tool, to allow for tracking of the position and/or orientation of the tracking unit [see 0033].
Regarding claim 7, Reicher et al does not disclose wherein the optical sensor tracks a position of one or more optical codes on the surgical tool to detect the section location of the tissue removal.
Nonetheless, Razzaque et al disclose tracking a position of one or more optical codes (RFID) on the surgical tool to detect the section location of the tissue removal [see 0033].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Reicher et al and Razzaque et al by tracking a position of one or more optical codes on the surgical tool, to allow for tracking of the position and/or orientation of the tracking unit [see 0033].
Regarding claim 8, Reicher et al and Razzaque et al do not disclose wherein the tissue removal is detected via a predefined motion or voice command by the user.
Nonetheless, Lin et al disclose wherein the tissue removal is detected via a predefined motion [see 0032, 0045].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Reicher et al, Razzaque et al, and Lin et al by using a predefined motion, for accuracy purposes.
Regarding claims 9 and 18, Reicher et al disclose wherein the pathology result is received from an on-site pathology machine configured to provide the pathology result in less than three minutes [inherently disclosed].
Regarding claims 10 and 19, Reicher et al disclose wherein the pathology result comprises one or more pathology scores including:
at least one of a probability of cancer presence associated with the tissue removal [see 0082];
a probability of cancer recurrence associated with the tissue removal;
or a time to cancer recurrence associated with the tissue removal.
Regarding claims 11 and 20, Reicher et al disclose wherein the one or more pathology scores are determined through use of a machine learning model which determines the pathology scores based at least on pathology imaging files associated with the tissue removal and demographic and clinical data associated with the patient [see 0115-0120 and 0148].
Regarding claim 12, Reicher et al disclose wherein the machine learning model further bases the one or more pathology scores on the 3D model of the area of interest or the updated 3D model of the area of interest [see 0115-0120 and 0148] by disclosing automatically generating, with the learning engine 110, a score based on a comparison of the diagnosis and the pathology result, and that the method 1000 may also include displaying, with the learning engine 110, the score within a graphical user interface [see 0117]; the learning engine 110 may use a score to determine when to perform an update [see 0120].
Regarding claim 13, Reicher et al disclose wherein the one or more pathology scores are displayed on the representation of the area of interest during the surgical operation [see 0115, 0117] by disclosing automatically generating, with the learning engine 110, a score based on a comparison of the diagnosis and the pathology result, and that the method 1000 may also include displaying, with the learning engine 110, the score within a graphical user interface [see 0117]; the learning engine 110 may use a score to determine when to perform an update [see 0120].
Regarding claim 15, Reicher et al disclose converting the updated 3D model of the area of interest to one or more DICOM image files that are geo-tagged with the pathology result and using the DICOM image files to guide postoperative treatment and/or monitoring [see 0081] by disclosing these marked images may be included in or associated with a corresponding report, such as a DICOM structured report [see 0081].
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Reicher et al (Pub. No.: US 2016/0364630) in view of Razzaque et al (Pub. No.: US 2014/0343404) as applied to claim 1 above, and further in view of Lin et al (Pub. No.: US 2018/0116732).
Regarding claim 5, Reicher et al and Razzaque et al don’t disclose wherein the representation of the area of interest comprises an augmented reality and/or mixed reality display.
Nonetheless, Lin et al disclose a calibrated 3D mixed reality headset that includes a head-mounted optical see-through stereoscopic display configured to display 3D content in the surroundings of a user [see 0007-0008, 0022, 0036].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Reicher et al, Razzaque et al, and Lin et al by using an augmented reality and/or mixed reality display, in order to display 3D content in the surroundings of a user [see 0007].
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Reicher et al (Pub. No.: US 2016/0364630) in view of Razzaque et al (Pub. No.: US 2014/0343404) as applied to claim 1 above, and further in view of LeBoeuf et al (Pub. No.: US 2021/0375457).
Regarding claim 14, Reicher et al and Razzaque et al don’t disclose wherein the one or more pathology scores are used to identify margins of a tumor being excised.
Nonetheless, LeBoeuf et al disclose wherein the one or more pathology scores are used to identify margins of a tumor being excised [see 0019-0021, 0030-0031].
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Reicher et al, Razzaque et al, and LeBoeuf et al by using one or more pathology scores to identify margins of a tumor being excised, for accuracy purposes.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOEL F BRUTUS whose telephone number is (571)270-3847. The examiner can normally be reached Mon-Sat, 11:00 AM to 7:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Pascal Bui-Pho can be reached at 571-272-2714. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOEL F BRUTUS/ Primary Examiner, Art Unit 3798