DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
This action is responsive to the reply filed December 2, 2025 (hereinafter “Reply”).
Claims 8-16 have been withdrawn.
Claims 1 and 4 are amended.
Claims 17-20 are cancelled.
Claims 21-24 are new.
Claims 1-7 and 21-24 are pending and have been examined.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-4, 7, and 21-24 are rejected under AIA 35 U.S.C. § 103 as being unpatentable over Fan et al. (WO 2020/123724 A1) (hereinafter “Fan”) in view of Poltarestskyi et al. (U.S. Pub. No. 2019/0380792 A1) (hereinafter “Poltarestskyi”).
Claim 1: Fan, as shown, discloses the following limitations:
a memory (see at least ¶¶ [0110]-[0121]); and
a processor configured to execute machine-readable instructions stored by the memory (see at least ¶¶ [0110]-[0121]), which when executed by the processor, cause the processor to:
receive a first dataset comprising one or more gynecological tumor features (see at least ¶ [0006]: the at least one patient health metric value comprises at least one variable selected from the group consisting of demographic variables, diabetic foot ulcer history variables, compliance variables, endocrine variables, cardiovascular variables, musculoskeletal variables, nutrition variables, infectious disease variables, renal variables, obstetrics or gynecology variables, drug use variables, other disease variables, or laboratory values; see also at least ¶ [0058]: the multispectral image system acquires images from a wide area of tissue, e.g., 5.9 x 7.9 inches, within 6 seconds or less and said multispectral image system outputs tissue analysis information, such as identification of a plurality of burn states, wound states, healing potential, a clinical characteristic including a cancerous or non-cancerous state of the imaged tissue, wound depth, wound volume, a margin for debridement, or the presence of a diabetic, non-diabetic, or chronic ulcer in the absence of imaging contrast agents);
identify spectral and spatial features from the one or more gynecological tumor features from the first dataset (see at least ¶ [0030]: Figure 16 graphically depicts a workflow for performing pixel-wise classification on multispectral image data, for example image data captured using the process of Figure 13, processed according to Figures 14 and 15, and/or using the multispectral multi-aperture imaging systems of Figures 3A-10B; see also at least ¶ [0058]: techniques for implementing spectral unmixing and image registration to generate a spectral datacube using image information received from such imaging systems; see also at least ¶ [0115]: The datacube generation module 1140 includes instructions that configure the processor 1120 to generate a multispectral datacube based on intensity signals received from the photodiodes of different sensor regions; see also at least ¶ [0116]);
train a machine learning model using the identified spectral and spatial features (see at least ¶ [0116]: some implementations of the datacube analysis module 1145 can provide the multispectral datacube (and optionally depth information) to a machine learning model trained to classify each pixel according to a certain state. These states may be clinical states in the case of tissue imaging, for example burn states (e.g., first degree burn, second degree burn, third degree burn, or healthy tissue categories), wound states (e.g., hemostasis, inflammation, proliferation, remodeling or healthy skin categories), healing potential (e.g., a score reflecting the likelihood that the tissue will heal from a wounded state, with or without a particular therapy), perfusion states, cancerous states, or other wound-related tissue states. The datacube analysis module 1145 can also analyze the multispectral datacube for biometric recognition and/or materials analysis; see also at least ¶¶ [0115] and [0138]-[0139]), wherein the training comprises:
performing a multi-class segmentation process based on the identified spectral and spatial features to produce a set of multi-class segmentation results (see at least ¶ [0058]: said multispectral image system is also configured to provide tissue analysis information, such as identification of a plurality of burn states, wound states, ulcer states, healing potential, a clinical characteristic including a cancerous or non-cancerous state of the imaged tissue, wound depth, wound volume, a margin for debridement, or the presence of a diabetic, non-diabetic, or chronic ulcer in the absence of imaging contrast agents. Similarly, in some of the methods described herein, the multispectral image system acquires images from a wide area of tissue, e.g., 5.9 x 7.9 inches, within 6 seconds or less and said multispectral image system outputs tissue analysis information, such as identification of a plurality of burn states, wound states, healing potential, a clinical characteristic including a cancerous or non-cancerous state of the imaged tissue, wound depth, wound volume, a margin for debridement, or the presence of a diabetic, non-diabetic, or chronic ulcer in the absence of imaging contrast agents; see also at least ¶¶ [0007]-[0008], [0040], and [0115]-[0116]), and
classifying the identified spectral and spatial features by comparing the multi-class segmentation results with a ground-truth classification (see at least ¶ [0222]: based on a set of ground truth images, a convolutional neural network (CNN) can be used for the automated segmentation of these tissue categories. In some embodiments, the algorithm structure can be a shallow U-net with a plurality of convolutional layers. In one example implementation, desirable segmentation outcomes were achieved with 31 convolutional layers. However, many other algorithms for image segmentation could be applied to achieve the desired output; see also at least ¶ [0224]: results from the U-net algorithm for each image in the validation set were compared to their corresponding ground truth mask. This comparison was done on a pixel-by-pixel basis; see also at least ¶ [0223]);
validate the machine learning model using a second dataset (see at least ¶ [0223]: the DFU image database was randomly split into three sets such that 269 training set images were used for algorithm training, 40 test set images for hyperparameter selection, and 40 validation set images for validation. The algorithm was trained with gradient descent and the accuracy of the test set images was monitored. The algorithm training was stopped when the test set accuracy was maximized. The results of this algorithm were then determined using the validation set; see also at least ¶¶ [0196], [0213], [0225], and [0229]); and
optimize the machine learning model by modifying the machine learning model using a third dataset (see at least ¶ [0228]: algorithm training may be conducted over a plurality of epochs, and an intermediate number of epochs may be determined at which accuracy is optimized. In the example implementation described herein, algorithm training for image segmentation was conducted over 80 epochs. As training was monitored, it was determined that epoch 73 achieved the best accuracy for test dataset; see also at least ¶¶ [0108] and [0218]), wherein the machine learning model is optimized to:
extract one or more structures of an anatomical area of interest from one or more scans (see at least ¶¶ [0211]-[0212]: Filters can be applied to the raw image by convolution. From the 512 images that result from these filter convolutions, a single 3D matrix may be constructed with dimensions 512 channels x 1044 pixels x 1408 pixels. Additional features may then be computed from this 3D matrix. For example, in some embodiments the mean, median, and standard deviation of the intensity values of the 3D matrix may be computed as further features for input into the machine learning algorithm. In addition to the six features described above (e.g., mean, median, and standard deviation of pixel values of the raw image and of the 3D matrix constructed from the application of convolutional filters to the raw image), additional features and/or linear or non-linear combinations of such features may further be included as desired. For example, the product or the ratio of two features could be used as new input features to the algorithm. In one example, the product of a mean and a median may be used as an additional input feature; see also at least ¶¶ [0007]-[0008], [0030], [0040], [0058], and [0115]-[0116]).
Fan does not explicitly disclose, but Poltarestskyi, as shown, teaches the following limitations:
generate a three-dimensional (3D) rendering of the one or more structures and superimpose the 3D rendering of the one or more structures on the one or more scans (see at least ¶ [0235]: referring again to FIG. 10, the Planning page of UI 522 also may provide images of the 3D virtual bone model 1008 and the 3D model of the implant components 1010 along with navigation bar 1012 for manipulating 3D virtual models 1008, 1010. For example, selection or de-selection of the icons on navigation bar 1012 allow the user to selectively view different portions of 3D virtual bone model 1008 with or without the various implant components 1010. For example, the scapula of virtual bone model 1008 and the glenoid implant of implant model 1010 have been de-selected, leaving only the humerus bone and the humeral implant components visible. Other icons can allow the user to zoom in or out, and the user also can rotate and re-orient 3D virtual models 1008, 1010, e.g., using gaze detection, hand gestures and/or voice commands; see also at least ¶ [0736]: if the expert is a VR participant, the expert may view images from one or more of the MR participants in order to view real time images of the patient's bone structure relative to a 3D model that is superimposed on the patient's bone structure. The expert participant may use commands, hand gestures, gaze or other control mechanisms to orient the 3D model relative to the patient's bone structure shown to the remote physician as VR elements. The use of VR to accommodate a remote participant may allow for a more qualified physician to perform the initialization stage. Alternatively, the expert may be a local MR participant, in which case it may still be advantageous to assign an initialization process of a registration process to that expert. Then, after initialization, one of the MR or VR users may initiate an optimization algorithm, such as a minimization algorithm to more precisely match the 3D model with real bone of the patient. The ability to involve a remote expert to a surgical procedure may be especially helpful for complex multi-step surgical procedures, such as a shoulder arthroplasty, an ankle arthroplasty, or any other type of orthopedic surgery that requires one or more complex steps; see also at least ¶¶ [0321] and [0740]); and
manipulate the superimposed 3D rendering based on a received voice command (see at least ¶¶ [0235], [0321], [0736], and [0740] and the analysis above; see also at least ¶ [0337]: as shown in FIG. 13, Augment Surgery widget 1300 may permit a user to select, e.g., with voice command keywords, whether the scapula is shown or not (Scapula ON/OFF) and, if shown, whether the scapula is shown as opaque or transparent (Scapula Opaque/Transparent). In addition, the user may select, e.g., with voice command keywords, whether a glenoid reaming axis is shown or not (Reaming Axis ON/OFF), whether everything is not shown (Everything Off), whether to rotate the displayed virtual objects to the left or to the right (Rotation Left/Right), and whether to STOP the rotation (Say STOP to Freeze)).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the augmented reality displaying and interfacing techniques taught by Poltarestskyi with the systems for assessment, healing prediction, and treatment disclosed by Fan, because Poltarestskyi teaches at ¶ [0736] that “The use of VR to accommodate a remote participant may allow for a more qualified physician to perform the initialization stage” and at ¶ [0003] that its techniques “can assist surgeons with the design and/or selection of surgical guides and implants that closely match the patient’s anatomy and can improve surgical outcomes by customizing a surgical plan for each patient.” See M.P.E.P. § 2143(I)(G).
Moreover, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the augmented reality displaying and interfacing techniques taught by Poltarestskyi with the systems for assessment, healing prediction, and treatment disclosed by Fan, because the claimed invention is merely a combination of old elements (the augmented reality displaying and interfacing techniques taught by Poltarestskyi and the systems for assessment, healing prediction, and treatment disclosed by Fan), in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. See M.P.E.P. § 2143(I)(A).
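For purposes of illustration of the training, validation, and optimization workflow mapped above for claim 1 (see Fan at ¶¶ [0222]-[0223] and [0228]), a minimal, non-limiting sketch of such a multi-class segmentation pipeline is reproduced below. The sketch assumes a PyTorch-style environment; the model, dataset, and loader names are hypothetical and are not drawn from Fan, Poltarestskyi, or the pending claims.

# Illustrative sketch only (hypothetical names): train a multi-class
# segmentation model with gradient descent on a training set, monitor
# accuracy on a held-out test set to select the best epoch (cf. Fan
# ¶ [0228]), and reserve a separate subset for validation (cf. ¶ [0223]).
import torch
from torch.utils.data import DataLoader, random_split

def train_segmentation_model(model, dataset, epochs=80):
    # Split one labeled dataset into training, test (epoch/hyperparameter
    # selection), and validation subsets, analogous to Fan's 269/40/40 split.
    n = len(dataset)
    n_test = n_val = max(1, n // 8)
    train_set, test_set, val_set = random_split(
        dataset, [n - n_test - n_val, n_test, n_val])
    train_loader = DataLoader(train_set, batch_size=4, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=4)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # gradient descent
    loss_fn = torch.nn.CrossEntropyLoss()                     # multi-class loss
    best_acc, best_state = -1.0, None

    for epoch in range(epochs):
        model.train()
        for images, masks in train_loader:        # masks: per-pixel class labels
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()

        # Monitor pixel accuracy on the test set and retain the weights from
        # the epoch at which that accuracy is maximized.
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, masks in test_loader:
                pred = model(images).argmax(dim=1)
                correct += (pred == masks).sum().item()
                total += masks.numel()
        if total and correct / total > best_acc:
            best_acc = correct / total
            best_state = {k: v.clone() for k, v in model.state_dict().items()}

    model.load_state_dict(best_state)             # optimized model
    return model, val_set                         # val_set held out for validation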
Claim 2: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above. Further, Fan, as shown, discloses the following limitations:
wherein the first, second, and third datasets comprise a magnetic resonant imaging (MRI) dataset, a 3D MRI dataset, an ultrasound/sonogram dataset, a computed tomography (CT) dataset, or a doppler dataset, and subjects’ metadata (see at least ¶ [0055]: physiologic measurement devices have been used to attempt to diagnose the healing potential of a DFU, such as transcutaneous oxygen measurement, laser Doppler imaging, and indocyanine green videoangiography; see also at least ¶ [0063]: Figure 2B illustrates examples of how certain scanning spectral imaging technologies generate the datacube 120. Specifically, Figure 2B illustrates the portions 132, 134, and 136 of the datacube 120 that can be collected during a single detector integration period. A point scanning spectrometer, for example, can capture a portion 132 that extends across all spectral planes λ at a single (x, y) spatial position. A point scanning spectrometer can be used to build the datacube 120 by performing a number of integrations corresponding to each (x, y) position across the spatial dimensions. A filter wheel imaging system, for example, can capture a portion 134 that extends across the entirety of both spatial dimensions x and y, but only a single spectral plane λ. A wavelength scanning imaging system, such as a filter wheel imaging system, can be used to build the datacube 120 by performing a number of integrations corresponding to the number of spectral planes λ. A line scanning spectrometer, for example, can capture a portion 136 that extends across all spectral dimensions λ and all of one of the spatial dimensions (x or y), but only a single point along the other spatial dimension (y or x). A line scanning spectrometer can be used to build the datacube 120 by performing a number of integrations corresponding to each position of this other spatial dimension (y or x); see also at least ¶¶ [0062], [0064], [0142], and [0153]), and
the spectral and spatial features include shapes and locations of the gynecological tumor features (see at least ¶ [0125]: there may be no common waveband passed to all sensor regions, as it can safely be assumed that there is no change in the shape or positioning of the object relative to the exposures 1205, 1210 and, thus previously computed disparity values can be used to register the NIR channels; see also at least ¶ [0126]: multiple exposures can be captured sequentially to generate PPG data representing the change in shape of a tissue site due to pulsatile blood flow. These PPG exposures may be captured at a non-visible wavelength in some implementations. Although the combination of PPG data with multispectral data may increase the accuracy of certain medical imaging analyses, the capture of PPG data can also introduce additional time into the image capture process. This additional time can introduce errors due to movement of the handheld imager and/or object, in some implementations. Thus, certain implementations may omit capture of PPG data; see also at least ¶¶ [0064], [0081], [0115]-[0116], [0134], and [0180]).
Claim 3: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above. Further, Fan, as shown, discloses the following limitations:
wherein the ground-truth classification includes pixel-level annotations or class-level annotations (see at least ¶ [0224]: results from the U-net algorithm for each image in the validation set were compared to their corresponding ground truth mask. This comparison was done on a pixel-by-pixel basis. Within each of the three tissue types this comparison was summarized using the following categories. A True Positive (TP) category included the total number of pixels for which the tissue type of interest was present at a pixel in the ground truth mask, and the model predicted the tissue type was present at this pixel. A True Negative (TN) category included the total number of pixels for which the tissue type of interest was not present at a pixel in the ground truth mask, and the model predicted the tissue type was not present at this pixel. A False Positive (FP) category included the total number of pixels for which the tissue type of interest was not present at a pixel in the ground truth mask, and the model predicted the tissue type was present at this pixel. A False Negative (FN) category included the total number of pixels for which the tissue type of interest was present at a pixel in the ground truth mask, and the model predicted the tissue type was not present at this pixel).
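For purposes of illustration of the pixel-level ground-truth comparison described in Fan at ¶ [0224] and relied upon above, a minimal, non-limiting sketch of a per-class, pixel-by-pixel tally of true/false positives and negatives is reproduced below; the array and function names are hypothetical and are not drawn from Fan or the pending claims.

# Illustrative sketch only (hypothetical names): compare a predicted
# segmentation mask to a ground-truth mask on a pixel-by-pixel basis and
# tally TP/TN/FP/FN counts for each class of interest (cf. Fan ¶ [0224]).
import numpy as np

def pixelwise_confusion(pred_mask, truth_mask, class_ids):
    counts = {}
    for c in class_ids:
        pred_c = (pred_mask == c)
        true_c = (truth_mask == c)
        counts[c] = {
            "TP": int(np.sum(pred_c & true_c)),    # class present in both
            "TN": int(np.sum(~pred_c & ~true_c)),  # class absent in both
            "FP": int(np.sum(pred_c & ~true_c)),   # predicted but not in truth
            "FN": int(np.sum(~pred_c & true_c)),   # in truth but not predicted
        }
    return counts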
Claim 4: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above. Further, Fan, as shown, discloses the following limitations:
wherein performing the multi- class segmentation comprises:
using area-based indexes to compare the multi-class segmentation results with the ground-truth classification, or using distance-based indexes to further evaluate the multi-class segmentation in terms of location and shape accuracy of extracted region boundaries from the identified spectral and spatial features (see at least ¶ [0131]: the outputs of both non-linear mapping modules 1310A, 1310B are then provided to the depth calculation module 1335, which can compute a depth of a particular region of interest in the image data. For example, the depth may represent the distance between the object and the image sensor. In some implementations, multiple depth values can be computed and compared to determine the depth of the object relative to something other than the image sensor. For example, a greatest depth of a wound bed can be determined, as well as a depth (greatest, lowest, or average) of healthy tissue surrounding the wound bed. By subtracting the depth of the healthy tissue from the depth of the wound bed, the deepest depth of the wound can be determined. This depth comparison can additionally be performed at other points in the wound bed (e.g., all or some predetermined sampling) in order to build a 3D map of the depth of the wound at various points (shown in Figure 14 as z(x,y) where z would be a depth value). In some embodiments, greater disparity may improve the depth calculation, although greater disparity may also result in more computationally intensive algorithms for such depth calculation; see also at least ¶¶ [0081], [0108], [0142], [0154], [0161], [0184], and [0224]).
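For purposes of illustration of the area-based and distance-based indexes recited in claim 4, a minimal, non-limiting sketch is reproduced below. The Dice coefficient and the Hausdorff distance are shown as familiar examples of, respectively, an area-based index and a distance-based index evaluating boundary location and shape; neither claim 4 nor Fan is limited to these particular indexes, and the function names are hypothetical.

# Illustrative sketch only (hypothetical names): an area-based overlap index
# and a distance-based boundary index for comparing one class's segmented
# region (boolean mask) against its ground-truth region.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def dice_index(pred_region, truth_region):
    # Area-based index: overlap of predicted and ground-truth regions.
    intersection = np.sum(pred_region & truth_region)
    denom = np.sum(pred_region) + np.sum(truth_region)
    return 2.0 * intersection / denom if denom else 1.0

def boundary_hausdorff(pred_region, truth_region):
    # Distance-based index: symmetric Hausdorff distance between the
    # extracted region boundaries (location and shape accuracy).
    pred_pts = np.argwhere(pred_region & ~binary_erosion(pred_region))
    truth_pts = np.argwhere(truth_region & ~binary_erosion(truth_region))
    return max(directed_hausdorff(pred_pts, truth_pts)[0],
               directed_hausdorff(truth_pts, pred_pts)[0])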
Claim 7: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above. Further, Fan, as shown, discloses the following limitations:
wherein the machine learning model comprises a deep learning model comprising a neural network selected from a group of neural networks consisting of: convolution neural network (CNN), Fully Convolutional Network (FCN), Global Convolutional Network (GCN) with Deep Multiple Atrous Convolutions (DMAC), Encoder-Decoder global convolutional network (HIFUJNet), U-Net, HRNet, and CE-Net (see at least ¶ [0008]: In some embodiments, the one or more processors are further configured to automatically segment the second subset of the plurality of pixels into at least two categories of non-wound pixels, the at least two categories selected from the group consisting of callus pixels, normal skin pixels, and background pixels. In some embodiments, the machine learning algorithm comprises a convolutional neural network. In some embodiments, the machine learning algorithm is at least one of a U-Net comprising a plurality of convolutional layers and a SegNet comprising a plurality of convolutional layers. In some embodiments, the machine learning algorithm is trained based on a dataset comprising a plurality of segmented images of wounds, ulcers, or burns. In some embodiments, the wound is a diabetic foot ulcer. In some embodiments, the one or more processors are further configured to output a visual representation of the segmented plurality of pixels for display to a user. In some embodiments, the visual representation comprises the image having each pixel displayed with a particular visual representation selected based on the segmentation of the pixel, wherein wound pixels and non-wound pixels are displayed in different visual representations; see also at least ¶¶ [0006], [0108], [0139], [0141], and [0160]).
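For purposes of illustration of the shallow U-Net-style encoder-decoder architecture referenced in the claim's recited group of neural networks and in Fan at ¶¶ [0008] and [0222], a minimal, non-limiting sketch is reproduced below; the class name is hypothetical and the layer widths are arbitrary assumptions not drawn from Fan or the pending claims.

# Illustrative sketch only (hypothetical names, arbitrary layer widths):
# a shallow encoder-decoder with a single skip connection, producing
# per-pixel class logits for multi-class segmentation (assumes even
# input height and width).
import torch
import torch.nn as nn

class ShallowUNet(nn.Module):
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1))          # per-pixel class logits

    def forward(self, x):
        e = self.enc(x)                              # encoder features (skip)
        m = self.mid(self.down(e))                   # bottleneck features
        u = self.up(m)                               # upsample to input size
        return self.dec(torch.cat([u, e], dim=1))    # decode with skip connection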
Claim 21: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above.
Fan does not explicitly disclose, but Poltarestskyi, as shown, teaches the following limitations:
wherein the superimposed 3D rendering comprises one or more superimposed cross-sectional images (see at least ¶ [0250]: returning to the example of FIG. 10, the Planning page presented by visualization device 213 also includes multi-planar image viewer 1014 (e.g., a DICOM viewer) and navigation bar 1016 that allow the user to view patient image data and to switch between displayed slices and orientations. For example, the user can select 2D Planes icons 1026 on navigation bar 1016 so that the user can view the 2D sagittal and coronal planes of the patient's body in multi-planar image viewer 1014),
wherein the voice command includes a scroll command (see at least ¶ [0215]: microphones 606, and associated speech recognition processing circuitry or software, may recognize voice commands spoken by the user and, in response, perform any of a variety of operations, such as selection, activation, or deactivation of various functions associated with surgical planning, intra-operative guidance, or the like; see also at least ¶ [0257]: selection of items on menu 1604 can remove features from the 3D images or add other parameters of the surgical plan, such as a reaming axis 1606, e.g., by voice commands, gaze direction and/or hand gesture selection. Placement of guide 1600 may be unnecessary for procedures in which visualization device 213 presents a virtual reaming axis or other virtual guidance, instead of a physical guide, to guide a drill for placement of a reaming guide pin in the glenoid bone. The virtual guidance or other virtual objects presented by visualization device 213 may include, for example, one or more 3D virtual objects. In some examples, the virtual guidance may include 2D virtual objects. In some examples, the virtual guidance may include a combination of 3D and 2D virtual objects), and
wherein to manipulate the superimposed 3D rendering based on the received voice command includes to scroll from a first superimposed cross-sectional image of the one or more superimposed cross-sectional images to a second superimposed cross-sectional image of the one or more superimposed cross-sectional images (see at least ¶¶ [0215], [0250], and [0257] and the analysis above).
The rationales to modify/combine the teachings of Fan to include the teachings of Poltarestskyi are presented above regarding claim 1 and incorporated herein.
Claim 22: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above.
Fan does not explicitly disclose, but Poltarestskyi, as shown, teaches the following limitations:
wherein the received voice command includes a remove command (see at least ¶¶ [0235], [0321], [0736], and [0740] and the analysis above; see also at least ¶ [0337]: as shown in FIG. 13, Augment Surgery widget 1300 may permit a user to select, e.g., with voice command keywords, whether the scapula is shown or not (Scapula ON/OFF) and, if shown, whether the scapula is shown as opaque or transparent (Scapula Opaque/Transparent). In addition, the user may select, e.g., with voice command keywords, whether a glenoid reaming axis is shown or not (Reaming Axis ON/OFF), whether everything is not shown (Everything Off), whether to rotate the displayed virtual objects to the left or to the right (Rotation Left/Right), and whether to STOP the rotation (Say STOP to Freeze); see also at least ¶¶ [0240] and [0776]), and
wherein to manipulate the superimposed 3D rendering based on the received voice command includes to:
remove a first structure of the one or more structures from the 3D rendering to generate an updated 3D rendering (see at least ¶ [0337] and the analysis above; see also at least ¶¶ [0235], [0321], [0736], and [0740]; see also at least ¶ [0240]: To further aid the user, MR system 212 may rotate the 3D models, walk around the 3D models, hide or show parts of the 3D models, or perform other actions to observe the 3D models; see also at least ¶ [0776]); and
superimpose the updated 3D rendering on the one or more scans (see also at least ¶¶ [0235], [0240], [0321], [0337], [0736], [0740], and [0776] and the analysis above).
The rationales to modify/combine the teachings of Fan to include the teachings of Poltarestskyi are presented above regarding claim 1 and incorporated herein.
Claim 23: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above.
Fan does not explicitly disclose, but Poltarestskyi, as shown, teaches the following limitations:
wherein to manipulate the superimposed 3D rendering based on the received voice command includes to rotate the superimposed 3D rendering (see at least ¶¶ [0235], [0321], [0736], and [0740] and the analysis above; see also at least ¶ [0337]: as shown in FIG. 13, Augment Surgery widget 1300 may permit a user to select, e.g., with voice command keywords, whether the scapula is shown or not (Scapula ON/OFF) and, if shown, whether the scapula is shown as opaque or transparent (Scapula Opaque/Transparent). In addition, the user may select, e.g., with voice command keywords, whether a glenoid reaming axis is shown or not (Reaming Axis ON/OFF), whether everything is not shown (Everything Off), whether to rotate the displayed virtual objects to the left or to the right (Rotation Left/Right), and whether to STOP the rotation (Say STOP to Freeze)).
The rationales to modify/combine the teachings of Fan to include the teachings of Poltarestskyi are presented above regarding claim 1 and incorporated herein.
Claim 24: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above.
Fan does not explicitly disclose, but Poltarestskyi, as shown, teaches the following limitations:
wherein to manipulate the superimposed 3D rendering based on the received voice command includes to toggle, on or off, a first structure of the one or more structures (see at least ¶¶ [0235], [0321], [0736], and [0740] and the analysis above; see also at least ¶ [0337]: as shown in FIG. 13, Augment Surgery widget 1300 may permit a user to select, e.g., with voice command keywords, whether the scapula is shown or not (Scapula ON/OFF) and, if shown, whether the scapula is shown as opaque or transparent (Scapula Opaque/Transparent). In addition, the user may select, e.g., with voice command keywords, whether a glenoid reaming axis is shown or not (Reaming Axis ON/OFF), whether everything is not shown (Everything Off), whether to rotate the displayed virtual objects to the left or to the right (Rotation Left/Right), and whether to STOP the rotation (Say STOP to Freeze); see also at least ¶¶ [0240] and [0776]).
The rationales to modify/combine the teachings of Fan to include the teachings of Poltarestskyi are presented above regarding claim 1 and incorporated herein.
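For purposes of illustration of the voice-command manipulations addressed for claims 21-24 above (scroll, remove, rotate, and toggle operations on a superimposed 3D rendering, cf. Poltarestskyi at ¶¶ [0215], [0240], [0250], [0257], and [0337]), a minimal, non-limiting sketch of a keyword dispatcher is reproduced below; the class, attribute, and method names are hypothetical and are not drawn from Poltarestskyi or the pending claims.

# Illustrative sketch only (hypothetical names): map recognized voice-command
# text onto manipulations of a superimposed 3D rendering.
class SuperimposedRendering:
    def __init__(self, structures, cross_sections):
        self.structures = dict(structures)     # structure name -> visible (bool)
        self.cross_sections = cross_sections   # ordered superimposed slice images
        self.slice_index = 0                   # currently displayed slice
        self.rotation_deg = 0.0

    def handle_voice_command(self, command):
        words = command.lower().split()
        if "scroll" in words:                  # claim 21: scroll between slices
            step = -1 if "back" in words else 1
            self.slice_index = (self.slice_index + step) % len(self.cross_sections)
        elif "remove" in words:                # claim 22: remove a structure and
            for name in self.structures:       # re-superimpose the updated rendering
                if name in words:
                    self.structures[name] = False
        elif "rotate" in words:                # claim 23: rotate the rendering
            self.rotation_deg += -15.0 if "left" in words else 15.0
        elif "toggle" in words:                # claim 24: toggle a structure on/off
            for name in self.structures:
                if name in words:
                    self.structures[name] = not self.structures[name]
        return self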
Claim 5 is rejected under AIA 35 U.S.C. § 103 as being unpatentable over Fan et al. (WO 2020/123724 A1) (hereinafter “Fan”) in view of Poltarestskyi et al. (U.S. Pub. No. 2019/0380792 A1) (hereinafter “Poltarestskyi”) and further in view of Lin et al. (U.S. Pub. No. 2016/0335770 A1) (hereinafter “Lin”).
Claim 5: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above.
Fan does not explicitly disclose, but Lin, as shown, teaches the following limitations:
wherein the first dataset comprises 3D magnetic resonant images (MRI) of uterine fibroids and the one or more gynecological tumor features comprise uterine fibroid features (see at least ¶ [0032]: the system 100 includes a segmentation module 110 which performs a three-dimensional segmentation of the contrast-enhanced MR images of the fibroid lesion. In one embodiment, a semi-automatic three-dimensional tumor segmentation is performed using a software program such as a software prototype (MEDISYS™, Philips Research, Suresnes, France). The software program may be stored in the memory 116 and configured to be accessed by the segmentation module 110. Alternatively, the software program may be stored on a non-transitory computer readable medium which is accessed by the segmentation module 110 and executed by the processor 104. The software may use non-Euclidean radial basis functions in order to perform the segmentation; see also at least ¶¶ [0004], [0021]-[0022], [0025], and [0035]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the techniques for three-dimensional quantitative evaluation taught by Lin with the systems for assessment, healing prediction, and treatment disclosed by Fan (as modified by Poltarestskyi), because Lin teaches at ¶ [0029] (emphasis added) that “a three-dimensional quantitative evaluation of uterine fibroids is performed in order to accurately evaluate uterine fibroid enhancement in absolute numeric values” and at ¶ [0009] that its techniques address “the need to create instruments that are capable of accurately quantifying the viable tissue within fibroids on intra-and post-procedural imaging.” See M.P.E.P. § 2143(I)(G).
Moreover, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the techniques for three-dimensional quantitative evaluation taught by Lin with the systems for assessment, healing prediction, and treatment disclosed by Fan (as modified by Poltarestskyi), because the claimed invention is merely a combination of old elements (the techniques for three-dimensional quantitative evaluation taught by Lin, the augmented reality displaying and interfacing techniques taught by Poltarestskyi, and the systems for assessment, healing prediction, and treatment disclosed by Fan), in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. See M.P.E.P. § 2143(I)(A).
Claim 6 is rejected under AIA 35 U.S.C. § 103 as being unpatentable over Fan et al. (WO 2020/123724 A1) (hereinafter “Fan”) in view of Poltarestskyi et al. (U.S. Pub. No. 2019/0380792 A1) (hereinafter “Poltarestskyi”) and further in view of Gillies et al. (U.S. Pub. No. 2017/0071496 A1) (hereinafter “Gillies”).
Claim 6: The combination of Fan and Poltarestskyi teaches the limitations as shown in the rejections above.
Fan does not explicitly disclose, but Gillies, as shown, teaches the following limitations:
wherein the first dataset comprises 3D magnetic resonant images (MRI) of ovarian tumors and the one or more gynecological tumor features comprise ovarian cancer features (see at least ¶ [0049]: the tumor of the disclosed methods can be any cell in a subject undergoing unregulated growth, invasion, or metastasis. In some aspects, the cancer can be any neoplasm or tumor for which radiotherapy is currently used. Alternatively, the cancer can be a neoplasm or tumor that is not sufficiently sensitive to radiotherapy using standard methods. Thus, the cancer can be a sarcoma, lymphoma, carcinoma, blastoma, or germ cell tumor. A representative but non-limiting list of cancers that the disclosed compositions can be used to treat include lymphoma, B cell lymphoma, T cell lymphoma, mycosis fungoides, Hodgkin's Disease, myeloid leukemia, bladder cancer, brain cancer, nervous system cancer, head and neck cancer, squamous cell carcinoma of head and neck, kidney cancer, lung cancers such as small cell lung cancer and non-small cell lung cancer, neuroblastoma/glioblastoma, ovarian cancer, pancreatic cancer, prostate cancer, skin cancer, liver cancer, melanoma, squamous cell carcinomas of the mouth, throat, larynx, and lung, colon cancer, cervical cancer, cervical carcinoma, breast cancer, epithelial cancer, renal cancer, genitourinary cancer, pulmonary cancer, esophageal carcinoma, head and neck carcinoma, large bowel cancer, hematopoietic cancers; testicular cancer; colon and rectal cancers, prostatic cancer, and pancreatic cancer. In particular embodiments, the tumor is a glioblastoma multiforme (GBM); see also at least ¶ [0113]).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging techniques taught by Gillies with the systems for assessment, healing prediction, and treatment disclosed by Fan (as modified by Poltarestskyi), because Gillies teaches at ¶ [0089] that its techniques can be used to “gain insight into the evolutionary dynamics within tumors” and “that combinations of sequences from standard MRI imaging can define spatially and physiologically distinct regions or habitats within the ecology of GBMs and that this may be useful as a patient-specific prognostic biomarker.” See M.P.E.P. § 2143(I)(G).
Moreover, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the imaging techniques taught by Gillies with the systems for assessment, healing prediction, and treatment disclosed by Fan (as modified by Poltarestskyi), because the claimed invention is merely a combination of old elements (the imaging techniques taught by Gillies, the augmented reality displaying and interfacing techniques taught by Poltarestskyi, and the systems for assessment, healing prediction, and treatment disclosed by Fan), in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. See M.P.E.P. § 2143(I)(A).
Response to Arguments
The arguments submitted with the Reply have been fully considered. Applicant’s amendments overcome the previously presented rejections under 35 U.S.C. § 101, which are hereby withdrawn. The remaining arguments are moot in view of the new grounds of rejection set forth above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The following references have been cited to further show the state of the art with respect to healthcare imaging diagnostics.
Stigall (U.S. Pub. No. 2014/0275996 A1) (constructing an image of a body structure); and
Morris et al. (“Diagnostic imaging.” The Lancet 379.9825 (2012): 1525-1533).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christopher Tokarczyk, whose telephone number is 571-272-9594. The examiner can normally be reached Monday-Thursday between 6:00 AM and 4:00 PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid, can be reached at 571-270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER B TOKARCZYK/ Primary Examiner, Art Unit 3687