DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1-10 are pending in this application and have been examined on the merits.
Claim Objections
Claim 1 is objected to because of the following informalities:
Claim 1, line 8: “a attenuation coefficient” should be “an attenuation coefficient”.
Claim 1, line 10: “a attenuation coefficient” should be “an attenuation coefficient”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 2 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (CN112927212A) as translated in view of Walsum (US20200222018A1).
Regarding claim 1,
Zhang teaches a method for calculating an index of plaque attenuation (IPA) of an intravascular optical coherence tomography (OCT) image, wherein the method comprises: acquiring an intravascular OCT image by scanning intravascular blood vessels through an OCT device (corresponding disclosure in at least [10], where an OCT image is acquired “an automatic recognition and analysis method of OCT cardiovascular plaques based on deep learning. The 3D OCT images are reconstructed and segmented in three dimensions, and the quantitative index IPA is obtained, which is finally based on IPA classifies plaques”);
determining a plaque region of the intravascular OCT image by using a target convolutional neural network (corresponding disclosure in at least [12], where the method determines a plaque region based on the OCT image “A method for automatic recognition and analysis of OCT cardiovascular plaques based on deep learning”);
and determining an IPA of the intravascular OCT image according to the attenuation coefficient of the intravascular OCT image (corresponding disclosure in at least [66], where the IPA is calculated based on the attenuation coefficient “According to the segmentation result A, calculate the plaque attenuation index IPA to obtain the plaque category. The specific method of result analysis is: Calculate and input the segmentation result A according to the calculation formula of the segmentation result A; Output patch category; Calculate the value of the patch attenuation coefficient IPA”).
Zhang does not teach determining an attenuation coefficient of the intravascular OCT image by using the target convolutional neural network, wherein the light attenuation coefficient of the intravascular OCT image does not comprise an attenuation coefficient of the calcified plaque region.
Walsum, in a similar field of endeavor, teaches a similar concept (detection of calcified plaque) of determining a calcified plaque region (corresponding disclosure in at least [0227], where a calcified plaque region is determined “Next to that, the vessel model might contain clinical relevant information, such as for example location and percentage of vessel obstruction… and amount of calcified plaque”) and determining an attenuation coefficient of the intravascular OCT image by using the target convolutional neural network, wherein the light attenuation coefficient of the intravascular OCT image does not comprise an attenuation coefficient of the calcified plaque region (corresponding disclosure in at least [0293], where an attenuation coefficient is determined of a region not comprising the calcified plaque region (another radiopaque region), which is determined through neural network techniques (automatic detection of the region through a neural network) “The area of the calcified plaque (3010, 3206) can be calculated by manual and/or (semi)automatic detection of the calcified plaque (calcified plaque region) with the enhanced image (3009, 3204). Videodensitometric analysis can also be performed. The volume and/or mass of the calcified plaque can be derived by comparing the density of the calcified plaque region to another radiopaque region within the enhanced image or the X-ray image data (2801). When knowing the properties of the radiopaque region, such as geometry and its (mass) attenuation coefficient”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated determining an attenuation coefficient not comprising the calcified plaque region, as taught by Walsum. One of ordinary skill in the art would have been motivated to incorporate this because the attenuation coefficient provides a control baseline against which to compare the calcified plaque region.
Regarding Claim 2, Zhang and Walsum teach the limitations of Claim 1, and Zhang further teaches wherein the step of determining a calcified plaque region of the intravascular OCT image comprises: processing the intravascular OCT image by using a target convolutional neural network to determine the calcified plaque region of the intravascular OCT image (corresponding disclosure in at least [8], where the CNN is used for determining the segmented area (the plaque region) “These methods are mainly divided into methods that focus on the use of deep learning, such as AlexNet, GoogleNet, VGG-Net, fasterR-CNN, DenseNet, ResNet, etc., on each depth section, segment the patch, and then extract the deep layer of the segmented area The features are then classified”).
Claims 3-8 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (CN112927212A) as translated and Walsum (US20200222018A1) as applied above, and further in view of Sadoughi (US20210192717A1) and Li (US20200226422A1).
Regarding Claim 3, Zhang and Walsum teach all of the limitations of Claim 2, and Zhang teaches acquiring a region of interest of the intravascular OCT training image, wherein the region of interest is used for representing a calcified plaque region in the intravascular OCT training image (corresponding disclosure in at least [8], where the methods focus on segmented regions (region of interest) to focus on a plaque region in the image “These methods are mainly divided into methods that focus on the use of deep learning, such as AlexNet, GoogleNet, VGG-Net, fasterR-CNN, DenseNet, ResNet, etc., on each depth section, segment the patch, and then extract the deep layer of the segmented area The features are then classified”).
Zhang does not teach wherein the target convolutional neural network is obtained through training by using the following method: processing an intravascular OCT training image by using a convolutional neural network to be trained to generate a first feature map; acquiring a texture feature matrix of a calcified plaque region in the intravascular OCT training image; generating a predicted mask according to the first feature map and the texture feature matrix; and training the convolutional neural network to be trained according to the predicted mask, the region of interest, and a standard mask to generate the target convolutional neural network wherein the predicted mask is a predicted value, the standard mask is an actual value, and the region of interest is used for improving a learning capability of a loss function of the convolutional neural network to be trained for edge structure information of the calcified plaque region.
Sadoughi, in a similar field of endeavor, teaches a similar concept (detection of plaque in OCT images), wherein the target convolutional neural network is obtained through training by using the following method: processing an intravascular OCT training image by using a convolutional neural network to be trained to generate a first feature map (corresponding disclosure in at least [0055], where training data is developed for the CNN “a training dataset for the neural network (e.g., the convolutional neural network 402 of FIG. 4) may be developed by first determining an overlap of a plurality of provisional TCFA regions in OCT images of an initial dataset” and further in [0046], where feature maps are outputted “The output of each layer may include a plurality of feature maps 404 which may be understood as differently filtered versions of an input image”);
acquiring a texture feature matrix of a calcified plaque region in the intravascular OCT training image (corresponding disclosure in at least [0046], where a matrix of the pixel intensity (the texture feature) is acquired; it is disclosed in the specification of the present application that the texture feature is defined as “one or more of energy, inertia, entropy, and correlation”; “The input image 410, and each image of the feature maps 404, may be represented as a matrix of pixel intensity values. The matrices of pixel intensity values may be understood as the data which may be used by the convolutional neural network 402”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have incorporated processing the OCT training image with the CNN and acquiring a texture feature matrix of the plaque region in the training image into the teachings of Zhang. One of ordinary skill in the art would have been motivated to incorporate this because CNNs are commonly used for image segmentation due to their advantages in object recognition regardless of image position, and features, such as the texture feature, assist in training the network for the identification.
The combined references do not teach generating a predicted mask according to the first feature map and the texture feature matrix; and training the convolutional neural network to be trained according to the predicted mask, the region of interest, and a standard mask to generate the target convolutional neural network, wherein the predicted mask is a predicted value, the standard mask is an actual value, and the region of interest is used for improving a learning capability of a loss function of the convolutional neural network to be trained for edge structure information of the calcified plaque region.
Li, in a similar field of endeavor, teaches a similar concept (region based segmentation) of generating a predicted mask according to the first feature map and the texture feature matrix (corresponding disclosure in at least [0138], where the predicted mask is generated according to the intensity feature (texture feature) “wherein the predicted mask is a predicted value, the standard mask is an actual value, and the region of interest is used for improving a learning capability of a loss function of the convolutional neural network to be trained for edge structure information of the calcified plaque region” and [0092], where matrices can be generated “the software modules 67 can include software such as preprocessing software, transforms, matrices, and other software-based components that are used to process image data or respond to patient triggers to facilitate co-registration of different types of image data by other software-based components 67 or to otherwise perform annotation of image data to generate ground truths and other software, modules, and functions suitable for implementing various embodiments of the disclosure”);
and training the convolutional neural network to be trained according to the predicted mask, the region of interest, and a standard mask to generate the target convolutional neural network (corresponding disclosure in at least [0124], where a region of interest segmentation is used alongside a ground truth (standard) and predicted mask “convolutional neural network suitable for training using annotated ground truth masks and generating probability maps suitable for assessing predictive outputs and showing classified image data with classified regions/features of interest”),
wherein the predicted mask is a predicted value, the standard mask is an actual value, and the region of interest is used for improving a learning capability of a loss function of the convolutional neural network to be trained for edge structure information of the calcified plaque region (corresponding disclosure in at least [0043] and [0141], where loss function is calculated for the training and test data, which correspond to the structure of the plaque “The loss function refers to the actual cross-entropy loss calculated from prediction and ground truth as shown in FIG. 2B. Mismatch percentage is the percentage of misclassified pixel in each image comparing to ground truth”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated a predicted mask; training the convolutional neural network to be trained according to the predicted mask, the region of interest, and a standard mask to generate the target convolutional neural network; and using the standard mask, the predicted mask, and the region of interest to improve a learning capability of a loss function of the convolutional neural network to be trained for edge structure information of the calcified plaque region into the teachings of the combined references. One of ordinary skill in the art would have been motivated to incorporate this because predictive masks are commonly used in CNNs for segmentation and image classification when training the model, in this instance for plaque boundary identification. The loss function of the CNN quantifies the difference between the predicted and actual masks, which is known in the art to further improve model accuracy.
Regarding Claim 4, the combined references noted above teach the method according to claim 3, and additionally Zhang teaches determining the region of interest (corresponding disclosure in at least [8]), but does not teach determining the region of interest of the intravascular OCT training image according to a pixel corresponding to the largest light attenuation coefficient or acquiring a plurality of A-lines of the intravascular OCT training image.
Sadoughi teaches a similar concept of determining the region of interest of the intravascular OCT training image according to a pixel corresponding to the largest light attenuation coefficient (corresponding disclosure in at least [0038], where the pixel intensity of the OCT image is determined, so the largest can be selected “The output of the subtract mean function 304 may then be passed to a rescale function 306, where a standard deviation of the mean-subtracted pixel intensity values may be determined, and then used to rescale the mean-subtracted pixel intensity values”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated determining the region of interest of the OCT image according to the pixel corresponding to the largest light attenuation coefficient into the teachings of Zhang. One of ordinary skill in the art would have been motivated to incorporate this because the region of highest light attenuation would indicate the plaque region, which is the primary focus of the user.
The combined references do not teach acquiring a plurality of A-lines of the intravascular OCT training image.
Li, in a similar field of endeavor, teaches acquiring a plurality of A-lines of the intravascular OCT training image (corresponding disclosure in at least [0012], where there are multiple scan lines acquired from the image “the image data used with systems and methods disclosed herein includes carpet view images, scan lines”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated acquiring A-lines and determining the region of interest based on the A-lines. One of ordinary skill in the art would have been motivated to incorporate this because A-lines correspond to each individual scan line of the image, allowing the user to view not only the superficial image but also the amplitude spikes, which provides more detail and precision.
Regarding Claim 5, the combined references noted above teach the method according to Claim 3, and Li further teaches generating a predicted mask according to the first feature map and the texture feature matrix (corresponding disclosure in at least [0124] of Li) and splicing the first feature map and the texture feature matrix to generate a second feature map (corresponding disclosure in at least [0041] of Li, where the CNN is a multi-layer network (splicing) “a multi-layer neural network architecture suitable for training using ground truth annotations and generating predictive results according to an illustrative embodiment of the disclosure” and further in [0124] of Li, where the multi-layer network generates predictive results (second feature map) “The multi-layer neural network architecture 115 of FIG. 2 is suitable for training using ground truth annotations and generating predictive results. The neural network architecture of FIG. 2 may be trained with ground truth masks for various types/classes (calcium, lumen, media, intima, and the numerous groups of others disclosed herein”).
The combined references do not teach performing dimensionality reduction.
Sadoughi, in a similar field of endeavor, teaches performing dimensionality reduction (corresponding disclosure in at least [0048], where pooling (dimensionality reduction) is performed “Pooling may be performed in order to reduce a dimensionality (e.g., size) of each feature map 404 while retaining or increasing certainty of feature identification. By pooling, a number of parameters and computations in the neural network 402 may be reduced, thereby controlling for overfitting, and a certainty of feature identification may be increased”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have incorporated the dimensionality reduction into the teachings of the combined references. One of ordinary skill in the art would have been motivated to incorporate this because it reduces the number of parameters and computations in the network, thereby controlling for overfitting and increasing certainty of feature identification (corresponding disclosure in [0048] of Sadoughi).
Regarding Claim 6, the combined references noted above teach the method according to Claim 5 and performing dimensionality reduction (corresponding disclosure in at least [0046] of Sadoughi) and performing the dimensionality reduction on the second feature map by using three 1x1 convolutional layers (corresponding disclosure in at least [0049], where there are three feature maps (second feature map included) which undergo the dimensionality reduction “following the first convolution, three feature maps 404 may be produced… Following the first pooling operation, the size of each feature map 404 may be reduced”).
Sadoughi discloses the claimed invention except for the 1x1 convolutional layers. It would have been obvious to have modified the reduction to specifically 1x1 dimensions, since the prior art teaches various reductions in dimensionality (see Fig. 4 of Sadoughi) that still allow for feature identification as needed, and since such a modification would have involved a mere change in the size of a component. A change in size is generally recognized as being within the level of ordinary skill in the art. See MPEP 2144.04(IV)(A). Additionally, Applicant has not provided criticality for the particular dimensions claimed.
Regarding Claim 7, the combined references noted above teach the method according to Claim 3 and acquiring a texture feature matrix of a calcified plaque region in the intravascular OCT training image (corresponding disclosure in at least [0046] of Sadoughi) and further teach determining a spatial gray-level co-occurrence matrix of the intravascular OCT training image (corresponding disclosure in at least [0046] of Sadoughi, where the pixel intensity values (gray-level co-occurrence) are displayed as a matrix based on the image “The input image 410, and each image of the feature maps 404, may be represented as a matrix of pixel intensity values. The matrices of pixel intensity values may be understood as the data which may be used by the convolutional neural network 402”);
determining at least one texture feature of the intravascular OCT training image according to the spatial gray-level co-occurrence matrix; and determining the texture feature matrix according to the texture feature (corresponding disclosure in at least [0046] of Sadoughi, where the intensity values (the texture feature) are determined “The input image 410, and each image of the feature maps 404, may be represented as a matrix of pixel intensity values. The matrices of pixel intensity values may be understood as the data which may be used by the convolutional neural network 402”).
Regarding Claim 8, the combined references noted above teach the method according to Claim 7 and wherein the at least one texture feature comprises: one or more of energy, inertia, entropy, and correlation (corresponding disclosure in at least [0032] of Sadoughi, where intensity (texture feature) is displayed/calculated “The acquired data includes one or more imaging parameters calculated for each pixel, or group of pixels (for example, a group of pixels assigned the same parameter value), of the display, where the one or more calculated image parameters includes one or more of an intensity”).
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (CN112927212A) and Walsum (US20200222018A1) as applied in Claim 1 and in further view of Sadoughi (US20210192717A1).
Regarding Claim 9, Zhang and Walsum teach all of the limitations of Claim 1 and Zhang further teaches a device for calculating an IPA of an intravascular OCT image (corresponding disclosure in at least [10], where an OCT image is acquired “an automatic recognition and analysis method of OCT cardiovascular plaques based on deep learning. The 3D OCT images are reconstructed and segmented in three dimensions, and the quantitative index IPA is obtained, which is finally based on IPA classifies plaques”).
Zhang does not specify wherein the device comprises a processor and a memory, the memory is configured to store a computer program, and the processor is configured to call the computer program from the memory and run the computer program.
Sadoughi, in a similar field of endeavor, teaches wherein the device comprises a processor and a memory, the memory is configured to store a computer program (corresponding disclosure in at least [0026], where there is a system with a processor, which needs to be stored in some memory “The scanner 106 may be communicably coupled to a system controller 102 that may be part of a single processing unit, or processor, or distributed across multiple processing units. The system controller 102 is configured to control operation of the system 100”), and the processor is configured to call the computer program from the memory and run the computer program (corresponding disclosure in at least [0028], where there is a storage medium for the program “The storage device 108 may include any known data storage medium, for example, a permanent storage medium, removable storage medium, and the like. Additionally, either or both of the memory 104 and the storage device 108 may be a non-transitory storage medium”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have specified, in the teachings of Zhang, a processor and a memory, wherein the memory is configured to store a computer program and the processor is configured to call the computer program from the memory and run the computer program. One of ordinary skill in the art would have been motivated to incorporate this because executing any computer program requires storage for the program, alongside a processor for execution.
Regarding Claim 10, Zhang and Walsum teach all of the limitations of Claim 1.
Zhang does not specify a non-transient computer-readable storage medium, wherein the non-transient computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, enables the processor to implement the method.
Sadoughi, in a similar field of endeavor, teaches a non-transient computer-readable storage medium, wherein the non-transient computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, enables the processor to implement the method (corresponding disclosure in at least [0028], where the storage includes a storage medium “The storage device 108 may include any known data storage medium, for example, a permanent storage medium, removable storage medium, and the like. Additionally, either or both of the memory 104 and the storage device 108 may be a non-transitory storage medium” and further in [0065], where the method is carried out via the storage medium “Specifically, method 900 may be carried out via the controller 102, and may be stored as executable instructions at a non-transitory storage medium, such as the memory 104 or the storage device 108”).
It would have been obvious to a person having ordinary skill in the art before the effective filing date to have specified, in the teachings of Zhang, a non-transient computer-readable storage medium storing a computer program as claimed. One of ordinary skill in the art would have been motivated to incorporate this because executing any computer program requires storage for the program, alongside a processor for execution.
Response to Arguments
Applicant's arguments filed 11/17/25 regarding the claim objections have been fully considered, but new objections are added in light of the amendments.
Applicant's arguments filed 11/17/25 regarding the 35 U.S.C. 101 rejections have been fully considered, and the rejections are withdrawn in light of the amendments.
Applicant’s arguments with respect to claim 1 regarding the 35 U.S.C. 102(a)(1) rejection have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KAITLYN KIM whose telephone number is (571)272-1821. The examiner can normally be reached Monday-Friday 6-2 PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Kozak can be reached at (571) 270-0552. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.E.K./Examiner, Art Unit 3797
/SERKAN AKAR/Primary Examiner, Art Unit 3797