DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested:
IDENTIFYING ANATOMICAL FEATURES OF A SUBJECT USING A SEGMENTATION APPROACH WITH A MACHINE LEARNING MODEL TRAINED USING MAPS AND TRAINING DATA SETS.
Claim Objections
Claims 2-16 and 19 are objected to because of the following informalities:
Claim 2, l. 1: the claim phrase “the method” should be changed to --the computer-implemented method--. The same issue is present in claims 3-16 and 19, which are objected to for the same reason.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention; or
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-4, 6-9, 15, 17, 18 and 20 is/are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Lu (US Pub. 2019/0223725).
Re claim 1: (Currently Amended) Lu discloses a computer-implemented method, comprising:
receiving imaging data representative of a volume of a subject's anatomy, wherein the received imaging data comprises at least one unidentified anatomical feature of interest (e.g. the system receives image data representing a part of the body, such as the heart or another organ; the imaged volume contains a feature that is not yet identified, which is taught in ¶ [24], [30], [62] and [68].); and
[0024] In one embodiment, the images are of the heart. For cardiac or heart segmentation, images of the hearts of various patients are obtained. Any number of images of each heart may be obtained. In other embodiments, the images are of other parts of the body, such as the torso or head. The object of interest in a medical image may be an organ (e.g., whole heart), part of an organ (e.g., left ventricle or other heart chamber), a cyst, a tumor, calcification, or other anomaly or lesion.
[0030] The training data includes a ground truth indication of the object. The ground truth indication is a segmentation of the object, such as a marker, trace, border, or other segmentation of the left ventricle in each image. For example, a border of the left ventricle, binary mask of heart walls forming the left ventricle, or binary mask of the entire left ventricle may be used as the ground truth. The medical images, such as short or long axis cardiac CINE slices of the heart from cardiac scanning, are physician-annotated to add the ground truth. Alternatively, automatic segmentation is applied to add the ground truth.
[0062] FIG. 5 is a flow chart diagram of one embodiment of object detection, such as heart segmentation with a MR imaging system. FIG. 5 shows a method for object (e.g., left ventricle) detection with a medical imaging system, such as a MR system. The machine-learnt network is applied to detect the object.
[0068] In act 54, the medical imaging system detects the locations in images representing the object. For example, the machine-learnt network is applied to determine locations of the left ventricle in one or more (e.g., all) of the images. Other heart regions may be alternatively or additionally segmented. The object is detected using the hidden features of the deep network. For example, the trained convolution units and LSTM units are applied to the scan data or derived feature values to extract the corresponding features and output the binary mask for the object for each of the input images. The features of the input images are extracted from the images. Other more abstract features may be extracted from those extracted features using the architecture. Depending on the number and/or arrangement of units, other features are extracted from features. Other features based on the temporal patterns of extracted features values are calculated using the LSTM units.
using a machine learning model to implement a segmentation approach to identify the at least one anatomical feature of interest in the received imaging data (e.g. the machine learning model segments the image to identify features such as the boundary of the object; the segmentation also allows identification of a ventricle of an organ, which is taught in ¶ [54]-[69].),
[0054] The neural network is defined to provide a segmentation output. Locations that are members or represent the object are identified. The segmentation may be a binary output per location or may identify a boundary. Where the spatiotemporal input data is slices or 2D planes over time (2D+t or 3D), the segmentation output is a pixel mask for each of the 2D planes. Where the spatiotemporal input data is volumes or 3D frames over time (3D+t or 4D), the segmentation output is a voxel mask for each of the 3D frames or volumes. A binary indication of membership of each pixel or voxel to the object is provided. Other segmentation outputs may be used.
[0055] In one embodiment, a SoftMax layer or unit 29 is the last layer of the neural network architecture. Since the task is binary segmentation, the SoftMax layer 28 is added at the end of the network. The SoftMax layer 28 implements binary cross-entropy loss based on the feature values input, but other functions may be used.
[0056] In act 14 of FIG. 1, a machine (e.g., image processor, workstation, computer or server) trains the neural network arrangement with the training data having ground truth segmentation of the object. The neural network including LSTM is trained using the medical images of the object and the ground truth annotation for the object. Machine learning is performed to train the various units using the defined deep architecture. The features (e.g., convolution kernels, transposed-convolution kernels, max pooling connections, and binary mapping) that are determinative or map to the ground truth segmentation are learned. The features providing the desired result or detection of the object are learned.
[0057] The results relative to the ground truth and the error for reconstruction for the feature learning network are back-projected to learn the features that work best. In one embodiment, a L2-norm loss is used to optimize the network. Other error functions may be used. In one embodiment, the weights of the network are randomly initialized, but another initialization may be used. End-to-end training is performed, but one or more features may be set. Batch normalization, dropout, and data augmentation are not used, but may be. The optimization is with the RMSprop optimizer, but other optimization functions (e.g., Adam, SGD, etc.) may be used. During the optimization, the different distinguishing features are learned. The features providing an indication of location of the object given input medical image sequences are learned.
[0058] In act 16, the machine outputs a trained neural network. The machine-learnt network incorporates the deep learned features for the various units and/or layers of the network. The collection of individual features forms a feature or feature set for distinguishing an object from other objects. The features are provided as nodes of the feature units in different levels of abstraction or compression. The nodes define convolution kernels trained to extract the features.
[0059] Once trained, a matrix, kernels, or other trained network is output. The data represents the trained architecture. The machine-learnt network includes definitions of convolution kernels and/or other characteristics of the neural network trained to detect the object of interest, such as a left ventricle. Alternatively, separate matrices or network representations are used for any of the nodes, units, layers, network, and/or detector.
[0060] The machine-learnt detector is output to a network or memory. For example, the neural network as trained is stored in a memory for transfer and/or later application.
[0061] Using the learned features, the machine-learnt network may detect the object of interest in an input series of medical images for a patient. Once the network is trained, the network may be applied. The network with defined or learnt features is used to extract from input images. The machine-learnt network uses the extracted features from the image to detect the object, such as detecting in the form of a spatial distribution or heatmap of likely locations of the object and/or detecting a full segmentation.
[0062] FIG. 5 is a flow chart diagram of one embodiment of object detection, such as heart segmentation with a MR imaging system. FIG. 5 shows a method for object (e.g., left ventricle) detection with a medical imaging system, such as a MR system. The machine-learnt network is applied to detect the object.
[0063] The same image processor or a different image processor than used for training applies the learnt features and network. For example, the network architecture, learnt weights, and learnt kernels are transmitted from a graphics processing unit used to train to a medical scanner, medical server, or medical workstation. An image processor of the medical device applies the machine-learnt network. For example, the medical imaging system of FIG. 7 is used.
[0064] Additional, different, or fewer acts may be provided. For example, acts for configuring the medical system are provided. The acts are performed in the order shown (top to bottom or numerical), but other orders may be used.
[0065] In act 52, the image processor receives a sequence of images of an object. The images are from a scan of a patient. For example, an MR imaging system scans the heart of a patient (e.g., torso, cardiac or heart scan) over time (e.g., one or more heart cycles). The resulting scan data is received from or by the MR system. The scan data is a set of spatiotemporal MR images.
[0066] The receipt is by scanning the patient. Alternatively, the receipt is by receiving from a network interface. In other embodiments, receipt is by loading from memory.
[0067] The received medical images are to be used to detect the location or locations of the object or objects of interest. The received medical image may be pre-processed, such as normalized in a same way as the training medical images.
[0068] In act 54, the medical imaging system detects the locations in images representing the object. For example, the machine-learnt network is applied to determine locations of the left ventricle in one or more (e.g., all) of the images. Other heart regions may be alternatively or additionally segmented. The object is detected using the hidden features of the deep network. For example, the trained convolution units and LSTM units are applied to the scan data or derived feature values to extract the corresponding features and output the binary mask for the object for each of the input images. The features of the input images are extracted from the images. Other more abstract features may be extracted from those extracted features using the architecture. Depending on the number and/or arrangement of units, other features are extracted from features. Other features based on the temporal patterns of extracted features values are calculated using the LSTM units.
[0069] For application, the scan data is input to the machine-learnt network. In one embodiment, the machine-learnt network is a fully convolutional network, such as a convolutional-to-transposed-convolutional network with a LSTM network or layer. The machine-learnt network may be a U-net encoder-decoder trained for detection of the heart region. Multiple levels of feature compression or abstraction are provided, such as four. The encoder segment has a plurality of convolutional layers with increasing feature compression or abstraction, and the decoder segment has a plurality of transposed-convolutional layers with decreasing feature compression or abstraction.
wherein,
the machine learning model is trained using maps respectively generated for training data sets, wherein at least one of the training data sets comprises training imaging data representative of a volume of a training anatomy (e.g. the training data is representative of a part of a body containing a volume, which is taught in ¶ [56]-[69] above. The machine learning model is trained to generate a heatmap for the training data, which is taught in ¶ [70]-[73].); and
[0070] The LSTM layer may be a convolutional LSTM. The LSTM operates on the level with a greatest feature compression or abstraction (e.g., bottom of the U-net between the encoder output and the decoder input), finding patterns over time for the most abstract feature values. The LSTM extracts global features that capture the temporal changes observed over a time window over at least part of the cardiac cycle.
[0071] In other embodiments, the machine-learnt network includes one or more LSTM layers operating at other levels of compression or abstraction, such as at a level of least compression/abstraction and/or intermediate levels of feature compression/abstraction. LSTM layers may operate on each of the levels of compression or abstraction, such as at any skip connections and the bottom connection of U-net between the encoder and the decoder. Spatial-temporal patterns are identified from features extracted from every level of the U-net encoder, by incorporating convolutional LSTM in the skip connections.
[0072] Any time window may be used in application. The same time window used in training is applied. For example, a time window less than (e.g., about ½ or ⅓ of the heart cycle) is used in the LSTM. The same time window is applied to each LSTM layer, but different LSTM units may have different window sizes and/or step sizes.
[0073] The trained neural network is configured by the machine training to output a heatmap or binary mask at a resolution of the medical images or scan data. For example, the neural network outputs a binary mask indication of locations of the left ventricle. Where a heatmap (e.g., probability by location) is output, thresholding or another image process (e.g., SoftMax) is applied to segment based on the heatmap. The machine-learnt neural network includes an output layer, such as a SoftMax layer, to output the binary mask. Other outputs, such as outputting the heatmap as a segmentation, may be used.
wherein the map for the at least one of the training data sets is generated by a spatial function configured to specify a spatial distribution of at least one training region relative to at least one control location in the training data set associated with the map (e.g. the map or generated area provides a spatial distribution of likely locations of the identified part of the body, or a full segmentation, which is taught in ¶ [61] and [73] above and in ¶ [74]-[76]. The location of the body part is specified relative to a control area of the scanned organ.),
[0074] In act 56, the medical imaging system generates an image with information that is a function of the detected heart region. The image is output. The results or segmented information are output.
[0075] In one embodiment, one or more medical images with annotations showing position or segmentation of the object are output. An image of the heart of the patient includes highlighting, coloring, brightness, graphics, or other markers showing locations of the detected object. In alternative embodiments, the scan data for the segmented object with other scan data removed is used to generate the image. The segmentation is used to mask or select data to be displayed.
[0076] In yet other embodiments, the image shows a value of a quantity where the value is a function of the segmented heart region. For example, volume, volume change, cross-sectional area, ejection fraction, left ventricular mass, wall thickness, wall abnormality measure, and/or other cardiac quantification is calculated. The calculation relies on the location of the object in an image and/or over time. The medical imaging system calculates the quantity using the segmentation. The image is generated with an annotation or other display of the value or values of the quantity or quantities. The image may or may not also include a spatial representation of the heart or segmented object from the scan data.
[0077] The use of LSTM in the neural network improves the medical imaging system's ability to segment. U-net is a prevalent segmentation model in medical imaging. The performance of two proposed models (e.g., FIGS. 2 and 3) is compared with U-net (i.e., FIG. 2 without the LSTM unit 29). To improve the baseline, a sequence-to-sequence (i.e., cycle-to-cycle) segmentation with U-net is used. In this variant of U-net, 3D convolution (2D+time) is used, so that U-net also uses the temporal information to some extent.
[0078] To evaluate the predicted masks, a dice coefficient is calculated. A higher score indicates better agreement with the ground truth. For training based on images from 89 patients and an 11-test subject partition, the dice scores for these models are provided in Table 1.
wherein the map is configured to penalize learning error in the at least one training region, wherein the at least one training region comprises the at least one unidentified anatomical feature of interest in the training data set associated with the map (e.g. the machine learning model predicts a heatmap representing probability by location, and the SoftMax feature segments based on the heatmap. The masks associated with the heatmap are scored according to their similarity to the ground truth, which is taught in ¶ [70]-[78] above. The SoftMax layer also implements a cross-entropy function that serves as a loss function used to penalize dissimilarity; the cross-entropy loss is taught in ¶ [55]-[57].).
[0055] In one embodiment, a SoftMax layer or unit 29 is the last layer of the neural network architecture. Since the task is binary segmentation, the SoftMax layer 28 is added at the end of the network. The SoftMax layer 28 implements binary cross-entropy loss based on the feature values input, but other functions may be used.
[0056] In act 14 of FIG. 1, a machine (e.g., image processor, workstation, computer or server) trains the neural network arrangement with the training data having ground truth segmentation of the object. The neural network including LSTM is trained using the medical images of the object and the ground truth annotation for the object. Machine learning is performed to train the various units using the defined deep architecture. The features (e.g., convolution kernels, transposed-convolution kernels, max pooling connections, and binary mapping) that are determinative or map to the ground truth segmentation are learned. The features providing the desired result or detection of the object are learned.
[0057] The results relative to the ground truth and the error for reconstruction for the feature learning network are back-projected to learn the features that work best. In one embodiment, a L2-norm loss is used to optimize the network. Other error functions may be used. In one embodiment, the weights of the network are randomly initialized, but another initialization may be used. End-to-end training is performed, but one or more features may be set. Batch normalization, dropout, and data augmentation are not used, but may be. The optimization is with the RMSprop optimizer, but other optimization functions (e.g., Adam, SGD, etc.) may be used. During the optimization, the different distinguishing features are learned. The features providing an indication of location of the object given input medical image sequences are learned.
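For illustration only, the binary cross-entropy loss referenced in ¶ [0055] and the dice coefficient referenced in ¶ [0078] can be sketched as below. This is a generic sketch, not code from Lu or from the application under examination; the arrays `target`, `good`, and `bad` are hypothetical examples.

```python
import numpy as np

def binary_cross_entropy(pred, target, eps=1e-7):
    """Mean per-pixel binary cross-entropy between a predicted
    probability map and a ground-truth binary mask (cf. ¶ [0055])."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(-np.mean(target * np.log(pred)
                          + (1 - target) * np.log(1 - pred)))

def dice_coefficient(mask_a, mask_b, eps=1e-7):
    """Dice overlap between two binary masks; a higher score indicates
    better agreement with the ground truth (cf. ¶ [0078])."""
    intersection = np.sum(mask_a * mask_b)
    return float((2.0 * intersection + eps)
                 / (np.sum(mask_a) + np.sum(mask_b) + eps))

# Hypothetical 2x2 masks: a prediction that agrees with the ground
# truth incurs a smaller loss than one that disagrees.
target = np.array([[1.0, 1.0], [0.0, 0.0]])
good = np.array([[0.9, 0.9], [0.1, 0.1]])
bad = np.array([[0.1, 0.1], [0.9, 0.9]])
```

As expected, the loss penalizes the disagreeing prediction more heavily, while the dice score of a mask against itself is 1.0.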
Re claim 2: (Original) Lu discloses the method of claim 1, wherein the machine learning model is configured to identify the anatomical feature of interest by:
using the segmentation approach to determine where, in the imaging data, the identified anatomical feature of interest is located (e.g. a segmentation approach is used to determine where the ventricle, or other body part, is located within the scanned image. This is taught in ¶ [56]-[78] above.); and
generating an indicator for indicating where, in the imaging data, the identified anatomical feature of interest is located (e.g. an area is located and highlighted with a color or mask to identify the body part, which is taught in ¶ [74]-[78] above.).
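For illustration only, the thresholding of a probability heatmap into a binary indicator mask described in ¶ [0073] can be sketched as follows; this is a generic sketch, and `heatmap_to_indicator` and the sample values are hypothetical, not drawn from Lu.

```python
import numpy as np

def heatmap_to_indicator(heatmap, threshold=0.5):
    """Threshold a per-pixel probability heatmap into a binary
    indicator mask marking where the detected feature lies."""
    return (np.asarray(heatmap) >= threshold).astype(np.uint8)

# Hypothetical 2x2 heatmap: probabilities at or above 0.5 become 1.
heatmap = np.array([[0.9, 0.2],
                    [0.4, 0.7]])
indicator = heatmap_to_indicator(heatmap)
```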
Re claim 3: (Currently Amended) The method of claim 1, wherein at least one control location does not overlap with the anatomical feature of interest (e.g. a wall of the left ventricle, or the left ventricle (LV) itself, can be considered to not overlap with the center of the right ventricle (RV) or right atrium (RA). The wall or the LV can be segmented, which is taught in ¶ [76] above and in ¶ [30].).
[0030] The training data includes a ground truth indication of the object. The ground truth indication is a segmentation of the object, such as a marker, trace, border, or other segmentation of the left ventricle in each image. For example, a border of the left ventricle, binary mask of heart walls forming the left ventricle, or binary mask of the entire left ventricle may be used as the ground truth. The medical images, such as short or long axis cardiac CINE slices of the heart from cardiac scanning, are physician-annotated to add the ground truth. Alternatively, automatic segmentation is applied to add the ground truth.
Re claim 4: (Currently Amended) The method of claim 1, wherein using the machine learning model comprises using an additional map as an input to the machine learning model to identify the at least one anatomical feature of interest in the received imaging data, and wherein the additional map is generated from the received imaging data (e.g. a heatmap is input and used with the process of thresholding to perform a segmentation based on the input of the heatmap, which is taught on ¶ [73] above. The detected heart region can then be output and identified based on the segmentation, which is taught in ¶ [74]-[76] above.).
[0056] In act 14 of FIG. 1, a machine (e.g., image processor, workstation, computer or server) trains the neural network arrangement with the training data having ground truth segmentation of the object. The neural network including LSTM is trained using the medical images of the object and the ground truth annotation for the object. Machine learning is performed to train the various units using the defined deep architecture. The features (e.g., convolution kernels, transposed-convolution kernels, max pooling connections, and binary mapping) that are determinative or map to the ground truth segmentation are learned. The features providing the desired result or detection of the object are learned.
Re claim 5: (Currently Amended) The method of claim 19, wherein a spatial overlap between adjacent training regions specified by the map generated for each training data set defines at least one prioritized training region in the training data set for the machine learning model to use to prioritize penalization of learning error in the at least one prioritized training region over penalization of learning error in: non-overlapping training regions of the training data set and/or another region of the training data set (e.g. the invention allows for training using the left ventricle and areas around it, such as the walls, as the training data set; other areas of the heart, as well as other parts of the body, can also be scanned, which is taught in ¶ [23]-[30]. With the heatmap representing the probability of location and the cross-entropy loss fine-tuning the machine learning model, the system identifies a priority area and penalizes the model when a detection falls outside the desired area. This is taught in ¶ [55]-[57] and [73] above.).
[0023] In act 10, images of a same type of object (e.g., heart) are obtained. The images are obtained by data transfer, capture, and/or loading from memory. Any number of images of a same type of object is obtained, such as tens or hundreds of images of the object. The images are obtained with a same scanner or different scanners. The object as occurring in many different patients is included in the collection of images. Where the object occurs with different backgrounds, the images are of the object in the various backgrounds.
[0024] In one embodiment, the images are of the heart. For cardiac or heart segmentation, images of the hearts of various patients are obtained. Any number of images of each heart may be obtained. In other embodiments, the images are of other parts of the body, such as the torso or head. The object of interest in a medical image may be an organ (e.g., whole heart), part of an organ (e.g., left ventricle or other heart chamber), a cyst, a tumor, calcification, or other anomaly or lesion.
[0025] The images are captured using MR scanners. For example, gradient coils, a whole-body coil, and/or local coils generate a pulse sequence in a magnetic field created by a main magnet or coil. The whole-body coil or local coils receive signals responsive to the re-orientation of molecules shifted due to the pulse sequence. In other embodiments, the images are captured using x-ray, computed tomography, fluoroscopy, angiography, ultrasound, positron emission tomography, or single photon emission computed tomography.
[0026] The obtained images may be scan data used to generate an image on a display, such as a medical image being scan data from medical imaging. The obtained images may be from data being processed to generate an image, data formatted for display, or data that has been used to display. Scan data may be data with no or some image processing. For example, a displayed image may represent scan data after image processing. As another example, k-space data reconstructed to an object domain by a Fourier process without other filtering or change may be scan data.
[0027] The images represent volumes. Three-dimensional datasets are obtained. In alternative embodiments, two-dimensional datasets representing planes are obtained. In one embodiment, the medical images are long-axis cardiac images for 100 or other number of subjects.
[0028] For each subject or set of images, images representing the object over time are acquired. For cardiac imaging or imaging of organs that vary over time due to cardiac or breathing cycles, a sequence of images is acquired over one or more cycles. Fractional cycles may be used. For example, cardiac images for 100 subjects are acquired. For each subject, the images or sequence of images represent that subject through 1-6 cycles. For a given cycle, any number of images may be acquired, such as 18-35 sequential frames or images.
[0029] The medical images are used for training in act 14. The medical images may be used as received or may be pre-processed. In one embodiment of pre-processing, the received images are normalized. Since different settings, imaging systems, patients being scanned, and/or other variations in acquiring images may result in different offsets and/or dynamic ranges, normalization may result in more uniform representation of the object. Any normalization may be used, such as setting a maximum value to 1 with all other values linearly scaled between 0 and 1. Each volumetric scan or medical image is individually normalized.
[0030] The training data includes a ground truth indication of the object. The ground truth indication is a segmentation of the object, such as a marker, trace, border, or other segmentation of the left ventricle in each image. For example, a border of the left ventricle, binary mask of heart walls forming the left ventricle, or binary mask of the entire left ventricle may be used as the ground truth. The medical images, such as short or long axis cardiac CINE slices of the heart from cardiac scanning, are physician-annotated to add the ground truth. Alternatively, automatic segmentation is applied to add the ground truth.
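For illustration only, the normalization quoted in ¶ [0029] — linearly scaling each scan so its values fall between 0 and 1 — can be sketched as below. This min–max variant is a generic sketch, not Lu's implementation.

```python
import numpy as np

def normalize_scan(scan):
    """Linearly rescale a scan so its maximum is 1 and its minimum is 0,
    as one way of realizing the per-image normalization of ¶ [0029]."""
    scan = np.asarray(scan, dtype=float)
    lo, hi = scan.min(), scan.max()
    if hi == lo:  # constant image: nothing meaningful to scale
        return np.zeros_like(scan)
    return (scan - lo) / (hi - lo)

# Hypothetical scan values with differing offset and dynamic range.
normalized = normalize_scan([[0, 50], [100, 25]])
```

Normalizing each volumetric scan individually, as ¶ [0029] describes, makes images acquired with different settings or scanners more uniform before training.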
Re claim 6: (Currently Amended) The method of claim 1, wherein the received imaging data corresponds to a basal region of the subject's heart, and wherein the at least one anatomical feature of interest to be identified using the trained machine learning model comprises at least one anatomical interface between adjacent chambers of the subject's heart (e.g. the heart walls of the left ventricle are considered a basal region of the heart, and this area can be identified by the machine learning model through highlighting. The heart walls are considered an area between the ventricular chambers of the heart. This is explained in ¶ [30] and [74]-[78] above.).
Re claim 7: (Currently Amended) The method of claim 1, wherein:
the at least one control location is identified based on a result of an initial segmentation model used to identify the at least one control location (e.g. the automatic segmentation used to add the ground truth indication is considered an initial segmentation model, which is explained in ¶ [30] above. The system uses this indication to identify the ground truth, or control location, for later use in determining this location within other images.).
Re claim 8: (Original) The method of claim 7, wherein the at least one control location comprises:
a centroid of a chamber of a heart; and/or an end point and/or a junction of ventricular and/or atrial musculature defining at least one interface between respective chambers of the heart (e.g. the left ventricle walls or border of the left ventricle are considered as an end point, which is taught in ¶ [30] above.).
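For illustration only, a centroid control location of the kind recited in claim 8 can be computed from a binary chamber mask as sketched below; the 5×5 mask with a 3×3 "chamber" is hypothetical, not drawn from Lu.

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (row, col) of a binary mask, e.g. the centroid of a
    segmented heart chamber used as a control location."""
    rows, cols = np.nonzero(np.asarray(mask))
    return float(rows.mean()), float(cols.mean())

# Hypothetical chamber mask: a 3x3 block centered in a 5x5 grid.
chamber = np.zeros((5, 5), dtype=int)
chamber[1:4, 1:4] = 1
```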
Re claim 9: (Currently Amended) The method of claim 1, wherein the spatial distribution of the at least one training region is defined by at least one parameter of the spatial function, wherein the at least one parameter is based on at least one dimension of at least one previously-identified anatomical feature in the training data set (e.g. the heatmap is a spatial distribution of an object based on a parameter; the parameter is the location, or coordinates, of the left ventricle. In other words, the dimension of the object or ventricle is its location within the image itself. The heatmap spatial distribution is explained in ¶ [61] and [73] above.).
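For illustration only, a spatial function specifying a distribution relative to a control location, as recited in claim 9, could take the form of a Gaussian weight map whose width parameter is tied to a feature dimension. The sketch below is hypothetical and not drawn from Lu.

```python
import numpy as np

def gaussian_region_map(shape, control, sigma):
    """Weight map whose values fall off with distance from a control
    location; sigma (the spatial-function parameter) could be based on
    a dimension of a previously identified feature."""
    rows, cols = np.indices(shape)
    d2 = (rows - control[0]) ** 2 + (cols - control[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical 5x5 map centered on a control location at (2, 2).
region_map = gaussian_region_map((5, 5), (2, 2), sigma=1.0)
```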
Re claim 15: (Currently Amended) The method of claim 1, wherein a loss function used for penalizing learning error is modified by the map, wherein the loss function is based on a difference between a measured value and a ground truth value for at least one pixel or voxel of the training imaging data (e.g. the heatmap represents the probability of the object's location within an area, and the loss in the system is modified based on how well the scanned image matches the heatmap. The system discloses pixels or voxels used in the segmentation output, which is taught in ¶ [44] and [54]. The ground truth is compared to the image to determine a loss to be applied to the system, which is taught in ¶ [55]-[57] above.).
[0044] The LSTM unit 29 is a recurrent neural network (RNN) structure for modeling dependencies over time. In addition to relating spatial features to the output segmentation, temporal features are included. The variance over time of pixels, voxels, or groups thereof is accounted for by the LSTM unit 29. The values of the features derived from the pixels, voxels or groups thereof may be different for different images in the sequence. The LSTM unit 29, where positioned to receive feature values rather than input image data, derives values for features based on the variance over time or differences over time (i.e., state information) of the input feature values for each node.
[0054] The neural network is defined to provide a segmentation output. Locations that are members or represent the object are identified. The segmentation may be a binary output per location or may identify a boundary. Where the spatiotemporal input data is slices or 2D planes over time (2D+t or 3D), the segmentation output is a pixel mask for each of the 2D planes. Where the spatiotemporal input data is volumes or 3D frames over time (3D+t or 4D), the segmentation output is a voxel mask for each of the 3D frames or volumes. A binary indication of membership of each pixel or voxel to the object is provided. Other segmentation outputs may be used.
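For illustration only (not drawn from Lu; all names and values are hypothetical), a minimal numpy sketch of a map-modified loss as discussed for claim 15 above — a per-pixel difference-based loss scaled by a spatial map so that error is penalized more heavily in designated training regions:

```python
import numpy as np

def map_weighted_loss(pred, ground_truth, loss_map):
    """Per-pixel squared-error loss scaled by a spatial loss map.

    pred, ground_truth: per-pixel measured and ground-truth values.
    loss_map: per-pixel weights; larger values penalize learning error
    more heavily in the corresponding training region.
    """
    per_pixel = (pred - ground_truth) ** 2       # difference-based loss
    return float(np.sum(loss_map * per_pixel))   # the map modifies the penalty

# toy 2x2 example: errors inside the weighted region count double
pred = np.array([[0.9, 0.1], [0.2, 0.8]])
truth = np.array([[1.0, 0.0], [0.0, 1.0]])
weights = np.array([[2.0, 1.0], [1.0, 2.0]])
loss = map_weighted_loss(pred, truth, weights)
```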
Re claim 17: (Currently Amended) A non-transitory machine-readable medium (1100) storing instructions (1102) executable by at least one processor (1104), wherein the instructions are configured to cause the at least one processor to:
receive (1106) imaging data representative of a volume of a subject's anatomy, wherein the received imaging data comprises at least one unidentified anatomical feature of interest (e.g. the system receives image data that depicts a part of the body, such as a heart or another organ. This part of the body comprises a volume that contains a feature that is not yet identified, which is taught in ¶ [24], [30], [62] and [68] above.); and
using (1108) a machine learning model, configured to implement a segmentation approach to identify the at least one anatomical feature of interest in the received imaging data, to identify the anatomical feature of interest in the received imaging data (e.g. the machine learning model segments parts of an image to identify the aspects, such as the boundary of the object. Moreover, the segmentation allows for identification of a ventricle of an organ, which is taught in ¶ [54]-[69] above.), wherein:
the machine learning model is trained using a map generated for each of a series of training data sets, wherein each training data set comprises training imaging data representative of a volume of a training anatomy (e.g. the training data is representative of a part of a body containing a volume, which is taught in ¶ [56]-[69] above. The machine learning model is trained to generate a heatmap for the training data, which is taught in ¶ [70]-[73] above.); and
the map for each training data set is generated by a spatial function configured to specify a spatial distribution of at least one training region relative to at least one control location in the training data set associated with the map (e.g. in the map or generated area, a spatial distribution of locations where an identified part of the body is located or detecting a full segmentation, which is taught in ¶ [61] and [73]-[76] above. The location of the body part is relative to a control area of a scanned organ.),
wherein the map is configured to penalize learning error in the at least one training region, wherein the at least one training region comprises the at least one unidentified anatomical feature of interest in the training data set associated with the map (e.g. the machine learning model predicts a heatmap to represent the probability by location. The SoftMax feature segments by the heatmap. The masks associated with the heatmap are scored according to their similarity with the ground truth, which is taught in ¶ [70]-[78] above. The SoftMax also contains a cross-entropy function that serves as a loss function, which is used to penalize dissimilarity. The cross-entropy loss is taught in ¶ [55]-[57] above.).
Re claim 18: (Currently Amended) Lu discloses an apparatus (1200) comprising:
at least one processor communicatively coupled to an interface, wherein the interface is configured to receive imaging data representative of a volume of a subject's anatomy (e.g. in ¶ [65] and [66] above, the processor can receive image data via an interface.), wherein the received imaging data comprises at least one unidentified anatomical feature of interest (e.g. the system receives image data that depicts a part of the body, such as a heart or another organ. This part of the body comprises a volume that contains a feature that is not yet identified, which is taught in ¶ [24], [30], [62] and [68] above.); and
a non-transitory machine-readable medium storing instructions readable and executable by the at least one processor (e.g. ¶ [94]-[96] below discloses a CPU that reads instructions from a memory to perform the invention.), wherein the instructions are configured to cause the at least one processor to use a machine learning model, configured to implement a segmentation approach to identify the at least one anatomical feature of interest in the received imaging data, to identify the anatomical feature of interest in the received imaging data (e.g. the machine learning model segments parts of an image to identify the aspects, such as the boundary of the object. Moreover, the segmentation allows for identification of a ventricle of an organ, which is taught in ¶ [54]-[69].), wherein:
the machine learning model is trained using a map generated for each of a series of training data sets, wherein each training data set comprises training imaging data representative of a volume of a training anatomy (e.g. the training data is representative of a part of a body containing a volume, which is taught in ¶ [56]-[69] above. The machine learning model is trained to generate a heatmap for the training data, which is taught in ¶ [70]-[73].); and
the map for each training data set is generated by a spatial function configured to specify a spatial distribution of at least one training region relative to at least one control location in the training data set associated with the map (e.g. in the map or generated area, a spatial distribution of locations where an identified part of the body is located or detecting a full segmentation, which is taught in ¶ [61] and [73]-[76] above. The location of the body part is relative to a control area of a scanned organ.),
wherein the map is configured to penalize learning error in the at least one training region, wherein the at least one training region comprises the at least one unidentified anatomical feature of interest in the training data set associated with the map (e.g. the machine learning model predicts a heatmap to represent the probability by location. The SoftMax feature segments by the heatmap. The masks associated with the heatmap are scored according to their similarity with the ground truth, which is taught in ¶ [70]-[78] above. The SoftMax also contains a cross-entropy function that serves as a loss function, which is used to penalize dissimilarity. The cross-entropy loss is taught in ¶ [55]-[57] above.).
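For illustration only (not drawn from Lu; scores and shapes are hypothetical), a minimal numpy sketch of the SoftMax-based segmentation discussed above — per-pixel class scores are converted to heatmap-like probabilities, from which a binary membership mask is derived:

```python
import numpy as np

def softmax(scores, axis=0):
    """Numerically stable softmax over per-pixel class scores."""
    e = np.exp(scores - scores.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# hypothetical per-pixel scores for (background, object) over a 2x2 image
scores = np.array([[[2.0, -1.0], [0.5, -2.0]],
                   [[0.0,  1.0], [0.4,  3.0]]])
probs = softmax(scores, axis=0)            # heatmap-like probabilities
mask = (probs[1] > probs[0]).astype(int)   # binary membership per pixel
```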
Re claim 20: (New) Lu discloses a non-transitory machine-readable medium storing instructions executable by at least one processor, wherein the instructions are configured to cause the at least one processor to perform the steps of the method of claim 1 (e.g. ¶ [94]-[96] discloses using a processor that is used to perform the instructions of the invention stored on a computer readable medium that is non-transitory.).
[0094] The instructions, medical images, network definition, features, machine-learnt detector, outputs, and/or other information are stored in a non-transitory computer readable memory, such as the memory 78. The memory 78 is an external storage device, RAM, ROM, database, and/or a local memory (e.g., solid state drive or hard drive). The same or different non-transitory computer readable media may be used for the instructions and other data. The memory 78 may be implemented using a database management system (DBMS) and residing on a memory, such as a hard disk, RAM, or removable media. Alternatively, the memory 78 is internal to the processor 76 (e.g. cache).
[0095] The instructions for implementing the training or application processes, the methods, and/or the techniques discussed herein are provided on non-transitory computer-readable storage media or memories, such as a cache, buffer, RAM, removable media, hard drive or other computer readable storage media (e.g., the memory 78). Computer readable storage media include various types of volatile and nonvolatile storage media. The functions, acts or tasks illustrated in the figures or described herein are executed in response to one or more sets of instructions stored in or on computer readable storage media. The functions, acts or tasks are independent of the particular type of instructions set, storage media, processor or processing strategy and may be performed by software, hardware, integrated circuits, firmware, micro code and the like, operating alone or in combination.
[0096] In one embodiment, the instructions are stored on a removable media device for reading by local or remote systems. In other embodiments, the instructions are stored in a remote location for transfer through a computer network. In yet other embodiments, the instructions are stored within a given computer, CPU, GPU or system. Because some of the constituent system components and method steps depicted in the accompanying figures may be implemented in software, the actual connections between the system components (or the process steps) may differ depending upon the manner in which the present embodiments are programmed.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 5, 10, 12, 16, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Golden (US Pub 2018/0259608).
Re claim 10: (Currently Amended) However, Lu fails to specifically teach the features of the method of claim 1, wherein the spatial function comprises a first Gaussian-based function centered about an origin defined by the at least one control location in the training data set, wherein the spatial distribution of the at least one training region defined by the first Gaussian-based function is distal from the origin.
However, this is well known in the art as evidenced by Golden. Similar to the primary reference, Golden discloses volume segmentation (same field of endeavor or reasonably pertinent to the problem).
Golden discloses wherein the spatial function comprises a first Gaussian-based function centered about an origin defined by the at least one control location in the training data set, wherein the spatial distribution of the at least one training region defined by the first Gaussian-based function is distal from the origin (e.g. a Gaussian distribution is created and is centered on the position of the ground truth landmark. The standard deviation from the centroid of the landmark can differ depending on the landmark and the anatomy of the heart, which is taught in ¶ [24], [222]-[224].).
[0024] In the 4D Flow workflow of a cardiac imaging application, the user may be required to define the regions of different landmarks in the heart in order to see different cardiac views (e.g., 2CH, 3CH, 4CH, SAX) and segment the ventricles. The landmarks required to segment the LV and see 2CH, 3CH, and 4CH left heart views include LV apex, mitral valve, and aortic valve. The landmarks required to segment the RV and see the corresponding views include RV apex, tricuspid valve and pulmonary valve.
[0222] At 2810, the label maps are defined which encode the annotation information in a way understandable by the neural network which will be used in later stages. The position of a landmark is encoded by indicating, at each position in the 3D volume, how likely the position is to be at the landmark position. To do so, a 3D Gaussian probability distribution is created, centered on the position of the ground truth landmark with standard deviation corresponding to observed inter-rater variability of that type of landmark across all the training data.
[0223] To understand inter-rater variability, consider one specific landmark such as the LV apex. For every study in which the LV Apex was annotated by more than one user or “rater,” the standard deviation of the LV Apex coordinates across all users is computed. By repeating this process for each landmark, the standard deviation for the Gaussian used to encode each landmark is defined. This process allows for the setting of this parameter in a principled manner. Among the different advantages of using this approach, it is noted that the standard deviation is different for each landmark, and depends on the complexity of locating the landmark. Specifically, more difficult landmarks have larger Gaussian standard deviation in the target probability maps. Further, the standard deviation is different along the x, y, and z axes, reflecting the fact that the uncertainty might be larger along one direction rather than another because of the anatomy of the heart and/or the resolution of the images.
[0224] Note that alternative strategies may also be used to define the standard deviation (arbitrary value, parameter search) and may lead to comparable results. FIG. 29 shows this transition from a landmark position, identified with a cross 2902 in a view 2904, to a Gaussian 2906 in a view 2908 evaluated on the image for the 2D case.
[0225] At 2812, once the 3D volumes have been defined for both the MM and the label map, the images are preprocessed. Generally, the goal is to normalize the image size and appearance for future training.
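For illustration only (not drawn from Golden; the shape, center, and per-axis sigmas are hypothetical), a minimal numpy sketch of the Gaussian label-map encoding described in ¶ [222]-[224] above — a 3D Gaussian centered on a ground-truth landmark, with a standard deviation that may differ along each axis:

```python
import numpy as np

def gaussian_label_map(shape, center, sigma):
    """3D Gaussian probability map centered on a ground-truth landmark.

    sigma is per-axis (z, y, x), mirroring the idea that the uncertainty
    may be larger along one direction than another.
    """
    zz, yy, xx = np.meshgrid(
        np.arange(shape[0]), np.arange(shape[1]), np.arange(shape[2]),
        indexing="ij",
    )
    d2 = (((zz - center[0]) / sigma[0]) ** 2
          + ((yy - center[1]) / sigma[1]) ** 2
          + ((xx - center[2]) / sigma[2]) ** 2)
    return np.exp(-0.5 * d2)  # peak value 1.0 at the landmark position

label_map = gaussian_label_map((8, 8, 8), center=(4, 4, 4), sigma=(1.0, 2.0, 2.0))
```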
Therefore, in view of Golden, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of wherein the spatial function comprises a first Gaussian-based function centered about an origin defined by the at least one control location in the training data set, wherein the spatial distribution of the at least one training region defined by the first Gaussian-based function is distal from the origin, incorporated in the device of Lu, in order to have a spatial distribution utilizing a Gaussian function with the spatial distribution being distal from an origin, which optimizes images by normalizing their size and appearance for future training (as stated in Golden ¶ [225]).
Re claim 12: (Currently Amended) Lu discloses the method of claim 10, wherein the volume comprises at least part of a heart (e.g. a heart volume is a part of the area to be segmented, which is taught in ¶ [23]-[30] above.).
However, Lu fails to specifically teach the features of and wherein the first Gaussian-based function is centered at a centroid of at least one chamber of the heart.
However, this is well known in the art as evidenced by Golden. Similar to the primary reference, Golden discloses volume segmentation (same field of endeavor or reasonably pertinent to the problem).
Golden discloses and wherein the first Gaussian-based function is centered at a centroid of at least one chamber of the heart (e.g. a Gaussian distribution is created and is centered on the position of the ground truth landmark. The standard deviation from the centroid of the landmark can differ depending on the landmark and the anatomy of the heart, which is taught in ¶ [24], [222]-[224] above. With the landmark able to be the LV within the primary reference and the secondary reference able to be trained to identify the landmark by being set on the centroid, the combination of references performs the features of this claim.).
Therefore, in view of Golden, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of and wherein the first Gaussian-based function is centered at a centroid of at least one chamber of the heart, incorporated in the device of Lu, in order to have a spatial distribution utilizing a Gaussian function with the spatial distribution being distal from an origin, which optimizes images by normalizing their size and appearance for future training (as stated in Golden ¶ [225]).
Re claim 16: (Currently Amended) Lu discloses the method of claim 19, comprising training the machine learning model by:
receiving at least one of the series of training data sets and an indication of a ground truth identifying the anatomical feature of interest in each of the training data sets (e.g. the system discloses receiving a group of images that can be used as a dataset for training the machine learning model. The ground truth can be used to identify a segmentation or an area of interest related to an anatomical feature within the image, which is taught in ¶ [23]-[30] above.);
determining the at least one control location in the at least one training data set (e.g. the system determines an area, such as a wall, border or whole area of a ventricle within the training data, which is taught in ¶ [23]-[30] above.);
generating the map for the at least one training data set by using the spatial function to generate a set of loss values, wherein the set of loss values indicate the spatial distribution of the at least one training region, and wherein the set of loss values are indicative of a loss function to apply to each pixel or voxel of the training data set to penalize learning error at the respective pixel or voxel (e.g. a heatmap is generated that is used to represent a spatial distribution of probable locations of the object, which is taught in ¶ [61], [76] above and [92]. The cross-entropy loss is calculated by comparing the ground truth with the measured input of an image in the SoftMax layer, which is taught in ¶ [55]-[57] above. The images of pixels or voxels are used in the training in order to convey the segmentation output, or are used in the binary indication of the pixel or voxel to an object, which is taught in ¶ [44] and [54] above. This is used in the loss calculation in ¶ [55]-[57] above.); and
[0092] The image processor 76 may be configured to output an image showing spatial distribution of the object. A sequence of images showing the spatial distribution of the object over time may be output. In other embodiments, the spatial distribution is used to calculate a value for a quantification. The value is output in an image.
training the machine learning model using the at least one training data set of the series and the corresponding map for the at least one training data set (e.g. the invention uses a data set that is different images of the heart area, which is taught in ¶ [23]-[30] above. With the heatmap representing the probability of the location matching the desired result, any mismatch between the prediction and the measured value can be backpropagated, which is taught in [55]-[57], [61] and [73] above.).
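For illustration only (not drawn from Lu; all values are hypothetical), a minimal numpy sketch combining the steps above — a generated map of loss values applied per pixel to a binary cross-entropy loss for one training pair:

```python
import numpy as np

def weighted_cross_entropy(prob_pred, target_mask, loss_map, eps=1e-7):
    """Pixel-wise binary cross-entropy, weighted by the training-region map."""
    p = np.clip(prob_pred, eps, 1 - eps)             # avoid log(0)
    ce = -(target_mask * np.log(p) + (1 - target_mask) * np.log(1 - p))
    return float(np.mean(loss_map * ce))             # map scales the penalty

# one hypothetical training pair: prediction, ground-truth mask, generated map
pred = np.array([[0.8, 0.3], [0.1, 0.9]])
mask = np.array([[1.0, 0.0], [0.0, 1.0]])
lmap = np.array([[1.0, 3.0], [1.0, 3.0]])  # heavier penalty in one region
loss = weighted_cross_entropy(pred, mask, lmap)
```

A mismatch between the prediction and the ground truth produces a larger loss where the map weights are larger, which is the sense in which the map penalizes learning error in the training region.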
Re claim 19: (New) Lu discloses the method of claim 1,
wherein each training data set comprises training imaging data representative of a volume of a training anatomy (e.g. the training data includes data with a volume of a body part that is recognized, which is taught in ¶ [23]-[30] above.); and
wherein the map for each training data set is generated by a spatial function configured to specify a spatial distribution of at least one training region relative to at least one control location in the training data set associated with the map, wherein the map is configured to penalize learning error in the at least one training region (e.g. the heatmap is generated to represent a spatial distribution of the location of the desired segmentation of the predicted body part. This segmentation is in relation to the overall area of the body part that is scanned. For example, the walls of the LV can be highlighted in relation to the area of the LV as a whole that is within the location of the desired segmented location, which is taught in ¶ [56]-[78] above.),
wherein the at least one training region comprises the at least one unidentified anatomical feature of interest in the training data set associated with the map (e.g. if the training region contains the left ventricle area, it will contain the unidentified area of the left ventricle walls that are also present that are not identified in the ground truth, which is taught in ¶ [54]-[78] above.).
However, Lu fails to specifically teach the features of wherein the machine learning model is trained using a map generated for each of a series of training data sets.
However, this is well known in the art as evidenced by Golden. Similar to the primary reference, Golden discloses volume segmentation (same field of endeavor or reasonably pertinent to the problem).
Golden discloses wherein the machine learning model is trained using a map generated for each of a series of training data sets (e.g. the invention discloses training with the use of a map generated for a set of training data that is used to train a machine learning model, which is taught in ¶ [222]-[225].).
[0222] At 2810, the label maps are defined which encode the annotation information in a way understandable by the neural network which will be used in later stages. The position of a landmark is encoded by indicating, at each position in the 3D volume, how likely the position is to be at the landmark position. To do so, a 3D Gaussian probability distribution is created, centered on the position of the ground truth landmark with standard deviation corresponding to observed inter-rater variability of that type of landmark across all the training data.
[0223] To understand inter-rater variability, consider one specific landmark such as the LV apex. For every study in which the LV Apex was annotated by more than one user or “rater,” the standard deviation of the LV Apex coordinates across all users is computed. By repeating this process for each landmark, the standard deviation for the Gaussian used to encode each landmark is defined. This process allows for the setting of this parameter in a principled manner. Among the different advantages of using this approach, it is noted that the standard deviation is different for each landmark, and depends on the complexity of locating the landmark. Specifically, more difficult landmarks have larger Gaussian standard deviation in the target probability maps. Further, the standard deviation is different along the x, y, and z axes, reflecting the fact that the uncertainty might be larger along one direction rather than another because of the anatomy of the heart and/or the resolution of the images.
[0224] Note that alternative strategies may also be used to define the standard deviation (arbitrary value, parameter search) and may lead to comparable results. FIG. 29 shows this transition from a landmark position, identified with a cross 2902 in a view 2904, to a Gaussian 2906 in a view 2908 evaluated on the image for the 2D case.
[0225] At 2812, once the 3D volumes have been defined for both the MM and the label map, the images are preprocessed. Generally, the goal is to normalize the image size and appearance for future training.
Therefore, in view of Golden, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of wherein the machine learning model is trained using a map generated for each of a series of training data sets, incorporated in the device of Lu, in order to train a model using a map with training data sets, which can aid in the network converging more rapidly (as stated in Golden ¶ [232]-[234]).
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lu, as modified by Golden, as applied to claim 10 above, and further in view of Common Knowledge of Gaussian Weighted Projections (Official Notice).
Re claim 11: (Original) However, Lu fails to specifically teach the features of the method of claim 10, wherein the first Gaussian-based function comprises an inverted Gaussian function.
However, this is well known in the art as evidenced by Golden. Similar to the primary reference, Golden discloses volume segmentation (same field of endeavor or reasonably pertinent to the problem).
Golden discloses wherein the first Gaussian-based function (e.g. a gaussian distribution is created and is centered on the position of the ground truth landmark, which is taught in ¶ [24], [222]-[224] above.).
Therefore, in view of Golden, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of wherein the first Gaussian-based function, incorporated in the device of Lu, in order to have a spatial distribution utilizing a Gaussian function with the spatial distribution being distal from an origin, which optimizes images by normalizing their size and appearance for future training (as stated in Golden ¶ [225]).
However, the combination above fails to specifically teach the features of comprises an inverted Gaussian function.
However, this is well known in the art as evidenced by Common Knowledge of Gaussian Weighted Projections (Official Notice). Similar to the primary reference, Gaussian Weighted Projection discloses displaying different visualizations of the image (same field of endeavor or reasonably pertinent to the problem).
Gaussian Weighted Projection discloses comprises an inverted Gaussian function (e.g. an inverted Gaussian function can be utilized to show different aspects of the Gaussian to highlight certain areas within the image that would be viewed differently under a normal gaussian function.).
Therefore, in view of Gaussian Weighted Projection, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of comprises an inverted Gaussian function, incorporated in the device of Lu, as modified by Golden, in order to utilize an inverted Gaussian function, which can provide a method to differentiate different materials based on the distribution of the Gaussian function.
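For illustration only (a hypothetical sketch, not part of any cited reference), a one-dimensional inverted Gaussian of the kind referenced in the Official Notice above — one minus a unit-height Gaussian, so the weight is smallest at the center and grows toward one distally, highlighting areas a normal Gaussian would de-emphasize:

```python
import numpy as np

def inverted_gaussian(x, mu=0.0, sigma=1.0):
    """One minus a unit-height Gaussian: 0 at the center, approaching 1 far away."""
    return 1.0 - np.exp(-0.5 * ((x - mu) / sigma) ** 2)

x = np.linspace(-3, 3, 7)   # sample points around the center
w = inverted_gaussian(x)    # weights increase with distance from the center
```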
Claim(s) 13 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Lu, in view of Golden and Schobs (NPL titled “Confidence-Quantifying Landmark localization for Cardiac MRI,” Pub Date: 5/2021).
Re claim 13: (Currently Amended) However, Lu fails to specifically teach the features of the method of claim 1, wherein the spatial function comprises a second Gaussian-based function specifying a spatial distribution indicating the at least one training region associated with the second Gaussian-based function, wherein the spatial distribution indicating the at least one training region associated with the second Gaussian-based function overlaps adjacent control locations in the training data set.
However, an aspect of this is well known in the art as evidenced by Golden. Similar to the primary reference, Golden discloses volume segmentation (same field of endeavor or reasonably pertinent to the problem).
Golden discloses wherein the spatial function comprises a Gaussian-based function specifying a spatial distribution indicating the at least one training region associated with the Gaussian-based function (e.g. a Gaussian distribution is created and is centered on the position of the ground truth landmark. The standard deviation from the centroid of the landmark can differ depending on the landmark and the anatomy of the heart, which is taught in ¶ [24], [222]-[225].).
Therefore, in view of Golden, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of wherein the spatial function comprises a Gaussian-based function specifying a spatial distribution indicating the at least one training region associated with the Gaussian-based function, incorporated in the device of Lu, in order to have a spatial distribution utilizing a Gaussian function with the spatial distribution being distal from an origin, which optimizes images by normalizing their size and appearance for future training (as stated in Golden ¶ [225]).
However, the combination above fails to specifically teach the features of wherein the spatial function comprises a second Gaussian-based function specifying a spatial distribution indicating the at least one training region associated with the second Gaussian-based function, wherein the spatial distribution indicating the at least one training region associated with the second Gaussian-based function overlaps adjacent control locations in the training data set.
However, this is well known in the art as evidenced by Schobs. Similar to the primary reference, Schobs discloses heatmaps (same field of endeavor or reasonably pertinent to the problem).
Schobs discloses wherein the spatial function comprises a second Gaussian-based function specifying a spatial distribution indicating the at least one training region associated with the second Gaussian-based function, wherein the spatial distribution indicating the at least one training region associated with the second Gaussian-based function overlaps adjacent control locations in the training data set (e.g. the prior reference of Golden (i.e. ¶ 170 and 171) contains a Gaussian function that is used to create a distribution. The Schobs reference is used to provide a second Gaussian function that specifies heatmaps, or a spatial distribution, indicating the training region used to learn the recognition of a landmark. This is taught in section 2 under Methods in the first paragraph. The reference shows in figure 1(a), (b) and (c) a spatial distribution of different areas that have landmarks that overlap in the same areas that are associated with the Gaussian heatmap. This is explained on page 3 in section 2 labeled Methods. This Gaussian function, added to the Gaussian function of the secondary reference, performs the features of the claim.).
Therefore, in view of Schobs, it would have been obvious to one of ordinary skill before the effective filing date of the claimed invention to have the feature of wherein the spatial function comprises a second Gaussian-based function specifying a spatial distribution indicating the at least one training region associated with the second Gaussian-based function, wherein the spatial distribution indicating the at least one training region associated with the second Gaussian-based function overlaps adjacent control locations in the training data set, incorporated in the device of Lu, as modified by Golden, in order to make use of the available dataset using the heatmap of training areas that overlap using a different Gaussian function, which can increase robustness against misidentifications while using a smaller training data set (as stated in Schobs page 2 paragraph 3 in the second column).
Re claim 14: (Original) Lu discloses the method of claim 13, wherein the volume comprises at least part of a heart (e.g. a part of a heart is used as the volume of the training data, as taught in ¶ [23]-[30] above.).
However, Lu fails to specifically teach the feature of wherein the spatial distribution of the at least one training region defined by the second Gaussian-based function comprises a line connecting adjacent end points and/or junctions of ventricular and/or atrial musculature defining at least one interface between respective chambers of the heart.
However, this feature is well known in the art, as evidenced by Schobs. Similar to the primary reference, Schobs discloses heatmaps (same field of endeavor, or reasonably pertinent to the problem).
Schobs discloses wherein the spatial distribution of the at least one training region defined by the second Gaussian-based function comprises a line connecting adjacent end points and/or junctions of ventricular and/or atrial musculature defining at least one interface between respective chambers of the heart (e.g. as seen in figure 1, lines connecting adjacent end points can be seen during training, in which a Gaussian heatmap centered on a landmark within a patch is regressed, as explained in the Methods section on page 3.).
Therefore, in view of Schobs, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have the feature of wherein the spatial distribution of the at least one training region defined by the second Gaussian-based function comprises a line connecting adjacent end points and/or junctions of ventricular and/or atrial musculature defining at least one interface between respective chambers of the heart, incorporated in the device of Lu, as modified by Golden, in order to make use of the available dataset using the heatmap of overlapping training areas produced by a different Gaussian function, which can increase robustness against misidentifications while using a smaller training data set (as stated in Schobs, page 2, paragraph 3, second column).
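As a purely illustrative sketch (not part of the record, and not code from any cited reference) of a spatial distribution comprising a line connecting adjacent end points, Gaussians may be centered on sampled points along the segment between two hypothetical end points; the coordinates, sigma, and sample count below are assumptions for illustration only:

```python
import numpy as np

def line_heatmap(height, width, p0, p1, sigma, samples=50):
    """Gaussian-weighted spatial distribution along the segment p0-p1 (illustrative)."""
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width))
    for t in np.linspace(0.0, 1.0, samples):
        # Interpolate a center point along the line connecting the end points.
        cy = p0[0] + t * (p1[0] - p0[0])
        cx = p0[1] + t * (p1[1] - p0[1])
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, g)
    return heat

# End points standing in for junctions defining an interface between chambers.
heat = line_heatmap(64, 64, (10, 10), (50, 50), sigma=3.0)
```

The resulting distribution is near 1 along the connecting line and decays with perpendicular distance from it, i.e. the training region is a line rather than a single point.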
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
"Gaussian Weighted Projection for Visualization of Cardiac Calcification" shows different Gaussian distribution functions of an image.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAD S DICKERSON whose telephone number is (571)270-1351. The examiner can normally be reached Monday-Friday, 10 AM-6 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan can be reached at 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHAD DICKERSON/ Primary Examiner, Art Unit 2683