DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s response, filed 30 October 2025, to the last office action has been entered and made of record.
The amendments to the claims are acknowledged. The amendments are supported by the original disclosure, and no new matter has been added.
Amendments to independent claims 1, 10, and 15 have necessitated a new ground of rejection over the applied prior art. Please see below for the updated interpretations and rejections.
Response to Arguments
Applicant's arguments filed 30 October 2025 have been fully considered but they are not persuasive.
In response to Applicant’s arguments on pp. 9-10 of Applicant’s reply that the teachings of Cai do not teach a selected point in the image or label map pair used for training the CNN segmentation model, the Examiner respectfully disagrees.
Examiner notes the claims are treated with their broadest reasonable interpretations consistent with the specification. See MPEP § 2111. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Furthermore, the test for obviousness is what the combined teachings of the references would have suggested to those of ordinary skill in the art. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981).
Claims 1, 10, and 15 recite the amended claim limitation of “a second image of each of the one or more pairs comprising an annotated point selected based on a segmentation mask of the anatomical object”.
With regard to the claimed subject matter of an annotated point selected based on a segmentation mask of the anatomical object, the specification of the Application describes, during training of the extraction network for lesion tracking:
“[H]aving a point of interest 304 selected in the baseline medical image 302, the 4D embedding maps (that are iteratively adapted by the model during training) are used to extract the 3D vector embedding for that particular point of interest 304” (see specification [0032]),
“[I]n one embodiment, annotated training CT image 402 may be leveraged to enable a hard and diverse negative sampling 418. In this embodiment, point of interest p' in augmented image x' 404-B may be selected as any other (e.g., arbitrary) point that does not correspond to point of interest p in image x 404-A and the locations of the one or more anatomical landmarks in augmented image y' 406-B may be selected as any other (e.g., arbitrary) point that does not correspond to the anatomical landmarks in image y 406-A.” (see specification [0041]),
“[I]n one embodiment, where a segmentation mask of, e.g., the lesions or the anatomical landmarks, is available, positive sampling 408 and negative sampling 418 of pixels can be performed from the segmented region based on a distance map.” (see specification [0042]), and
“[I]n one embodiment, the extraction network is trained in a self-supervised approach without annotated training data. In this embodiment, one or more points of interest p are randomly selected from an overlapping area between image 404-A and augmented image x' 404-B.” (see specification [0043]).
The specification does not provide an explicit definition of, or further detail for, the term “selecting” beyond describing the selection of points of interest as equivalent to the sampling of points. The plain meaning of the term “selecting” includes choosing an element from a larger set of elements.
Thus, the broadest reasonable interpretation, in light of the specification, of the claim term “an annotated point selected” includes annotated points sampled or chosen from a larger set of points.
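For illustration of this interpretation only, the following is a minimal Python sketch of sampling an annotated point from a larger set of points in a segmentation mask based on a distance map, consistent with specification [0042]. All function and variable names are hypothetical; the sketch is not asserted to be the implementation of the Application or of any cited reference.

```python
import numpy as np
from scipy import ndimage

def sample_annotated_point(mask: np.ndarray, rng: np.random.Generator):
    """Select one annotated point from the set of points in a segmentation
    mask, weighting the choice by a distance map so that points deep inside
    the segmented region (far from the mask contour) are favored."""
    dist = ndimage.distance_transform_edt(mask)  # distance to nearest background pixel
    ys, xs = np.nonzero(mask)                    # the larger set of candidate points
    weights = dist[ys, xs] / dist[ys, xs].sum()  # sample proportionally to depth
    idx = rng.choice(len(ys), p=weights)
    return int(ys[idx]), int(xs[idx])            # the selected annotated point

rng = np.random.default_rng(0)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 25:45] = True                        # toy segmentation mask
print(sample_annotated_point(mask, rng))
```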
Yan is relied upon to teach that, in the learning process for the self-supervised anatomical embedding (SAM) method, overlapping patch pairs with pixel pairs corresponding to a same position, e.g., body part, are used in the training batch, and positive and negative samples are selected from the image patches (see Yan [0029] and [0032]-[0033]).
Cai is relied upon to teach a technique for training a lesion segmentation method using a convolutional neural network (CNN) model, where 2D training data for the segmentation model are created by generating slice image and label pairs, and the training labels include segmentation masks generated from initial lesion segmentations for the CNN model on the slice images (see Cai Fig. 1, sect. 2. Method, and sect. 2.3. Weakly Supervised Slice-Propagated Segmentation).
One of ordinary skill in the art would have found it obvious from the combined teachings of Yan and Cai to apply the teachings of Cai to the teachings of Yan, such that segmentation mask images of lesions are included in the training dataset for training the neural network based SAM method for detecting an anatomical location in a query image, in which positive and negative samples are selected from the image patches according to the included segmentation mask training labels.
As the combined teachings of the cited prior art, notably Yan and Cai, suggest training the neural network based SAM method by including segmentation masks of lesions in the training dataset as training labels, with positive and negative samples selected from the image patches according to those labels, the cited prior art teachings provide for the broadest reasonable interpretation of “a second image of each of the one or more pairs comprising an annotated point selected based on a segmentation mask of the anatomical object”.
Applicant’s remaining arguments with respect to claims 1, 10, and 15 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “means for receiving…”, “means for extracting, from the first input medical image, a first set of embeddings …”, “means for extracting, from the second input medical image, a second set of embeddings …”, “means for determining …”, and “means for outputting …” in claims 10 and 12-14.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 3-10, 12-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yan et al. (US 2022/0180126), herein Yan, in view of Liu et al. (US 2021/0248761), herein Liu, Cai et al. (“Accurate Weakly Supervised Deep Lesion Segmentation on CT Scans: Self-Paced 3D Mask Generation from RECIST”), herein Cai, and Bonakdar Sakhi et al. (US 2022/0138932), herein Bonakdar Sakhi.
Regarding claim 1, Yan discloses a computer implemented method comprising:
receiving a first input medical image and a second input medical image each depicting an anatomical object of a patient, the first input medical image comprising a point of interest corresponding to a location of the anatomical object (see Yan [0037], where, in the inference process of a self-supervised anatomical embedding (SAM) method, a template image with a labeled point of interest and an unlabeled query image are used as inputs; and see Yan [0052], where the SAM method is used to perform lesion matching between a template scan and follow-up scans);
extracting, from the first input medical image, a first set of embeddings associated with a plurality of scales by 1) extracting, from the first input medical image, regions each having a scale according to a respective one of the plurality of scales and 2) extracting embeddings from each of the regions extracted from the first input medical image using a machine learning based extraction network, the plurality of scales comprising a coarse scale, and a fine scale (see Yan Fig. 1 and [0030]-[0031], where a coarse-to-fine neural network architecture based on a ResNet and a feature pyramid network (FPN) is trained to predict global and local embeddings, multi-scale features being extracted from the input image as it is processed by the ResNet and FPN, with the coarsest FPN level corresponding to the global embedding tensor and the finest FPN level corresponding to the local embedding tensor; see Yan Fig. 2 and [0037], where global and local embedding tensors are computed for the template image);
extracting, from the second input medical image, a second set of embeddings associated with the plurality of scales by 1) extracting, from the second input medical image, regions each having a scale according to a respective one of the plurality of scales and 2) extracting embeddings from each of the regions extracted from the second input medical image using the machine learning based extraction network (see Yan Fig. 1 and [0030]-[0031], where the coarse-to-fine neural network architecture based on a ResNet and FPN is trained to predict global and local embeddings, multi-scale features being extracted from the input image as it is processed by the ResNet and FPN, with the coarsest FPN level corresponding to the global embedding tensor and the finest FPN level corresponding to the local embedding tensor; see Yan Fig. 2 and [0037], where global and local embedding tensors are computed for the query image);
determining a location of the anatomical object in the second input medical image by comparing embeddings of the first set of embeddings corresponding to the point of interest with embeddings of the second set of embeddings (see Yan Fig. 2 and [0037], where global and local similarity maps are computed between anchor embedding tensors from the point of interest of the template image and the query embedding tensors, and used to detect the anatomical location); and
outputting the location of the anatomical object in the second input medical image (see Yan Fig. 2 and [0037], where the peak of the global and local similarity maps is determined as the detected anatomical location on the query image),
wherein the machine learning based extraction network is trained using one or more pairs of training images, a first image of each of the one or more pairs comprising a point of interest corresponding to a location of the anatomical object (see Yan [0029] and [0032]-[0033], where, in the learning process of the SAM method, unlabeled CT volumes are randomly sampled, two 3D patches with random locations and sizes are cropped and resized, and, when the patches overlap, pixel pairs from the two patches are determined to correspond to the same position (e.g., body part)).
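Solely as an illustrative aid to the claim mapping above, the following Python sketch shows embedding-based point matching of the kind described: the embedding at the template point of interest is compared, by cosine similarity, against every embedding of the query image, and the peak of the resulting similarity map is taken as the detected location. Names are hypothetical, and Yan's full method additionally computes and combines both global (coarse) and local (fine) similarity maps, which this sketch omits.

```python
import numpy as np

def locate_point(template_emb: np.ndarray, query_emb: np.ndarray, poi: tuple):
    """template_emb, query_emb: (C, H, W) embedding maps; poi: (y, x).
    Returns the (y, x) peak of the cosine-similarity map between the
    template embedding at the point of interest and the query embeddings."""
    anchor = template_emb[:, poi[0], poi[1]]
    anchor = anchor / np.linalg.norm(anchor)
    q = query_emb / np.linalg.norm(query_emb, axis=0, keepdims=True)
    sim = np.tensordot(anchor, q, axes=([0], [0]))   # (H, W) similarity map
    return np.unravel_index(np.argmax(sim), sim.shape)

rng = np.random.default_rng(0)
template = rng.standard_normal((16, 32, 32))         # toy embedding maps
query = rng.standard_normal((16, 32, 32))
print(locate_point(template, query, (10, 12)))
```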
While Yan teaches that a coarse-to-fine neural network architecture is trained to predict global and local embeddings, where the ResNet and FPN used in the network architecture are adopted to fuse multi-scale features (see Yan Fig. 1 and [0030]-[0031]), Yan does not explicitly disclose the plurality of scales comprising one or more intermediate scales.
Liu teaches a related and pertinent image segmentation method using a convolutional neural network with multi-scale context aggregation (see Liu Abstract), in which feature maps of multiple scales, ranging from global to local scale contexts with intermediate scale contexts, are computed and used to compute an aggregated-context feature map, with the advantages of improving segmentation accuracy, reducing semantic gaps among contexts of different scales for smooth predictions, and progressively integrating local features in a residual refinement manner, which helps end-to-end training (see Liu Fig. 3 and [0054]-[0057]).
At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Liu to the teachings of Yan, such that intermediate scale embedding tensors are also extracted and used to compute similarity maps to detect the anatomical location of the point of interest in the query image.
This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results.
In this instance, Yan discloses a base method of using the disclosed SAM method to perform lesion matching between a template scan image and a follow-up query scan image, which uses a coarse-to-fine neural network architecture to extract global and local embedding tensor maps from the images to calculate similarity maps and detect the anatomical location in the query image.
Liu teaches a known technique for an image segmentation method using a convolutional neural network, where feature maps of multiple scales, ranging from global to local scale contexts with intermediate scale contexts, are computed and used to compute an aggregated-context feature map, with the advantages of improving segmentation accuracy, reducing semantic gaps among contexts of different scales for smooth predictions, and progressively integrating local features in a residual refinement manner, which helps end-to-end training.
One of ordinary skill in the art would have recognized that applying Liu’s teachings would allow the method of Yan to also extract intermediate scale embedding tensors from the template and follow-up query images to compute corresponding similarity maps to detect the anatomical location of the point of interest in the query image, predictably leading to improved lesion matching using the SAM method with improved segmentation accuracy and smoother predictions.
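As a non-limiting illustration of the combination suggested above (coarse, intermediate, and fine scale embeddings), the following PyTorch sketch extracts one embedding map per scale from successive stride-2 stages, standing in loosely for the ResNet/FPN architecture of the cited art. The module, layer sizes, and names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleEmbedder(nn.Module):
    """Toy coarse/intermediate/fine embedding extractor: each stride-2
    stage yields a feature map at one scale, and every scale is projected
    to a common embedding dimension."""
    def __init__(self, emb_dim: int = 16):
        super().__init__()
        self.stage1 = nn.Conv2d(1, 32, 3, stride=2, padding=1)    # fine scale
        self.stage2 = nn.Conv2d(32, 64, 3, stride=2, padding=1)   # intermediate scale
        self.stage3 = nn.Conv2d(64, 128, 3, stride=2, padding=1)  # coarse scale
        self.proj = nn.ModuleList([nn.Conv2d(c, emb_dim, 1) for c in (32, 64, 128)])

    def forward(self, x):
        f1 = F.relu(self.stage1(x))
        f2 = F.relu(self.stage2(f1))
        f3 = F.relu(self.stage3(f2))
        # One embedding map per scale: fine (1/2), intermediate (1/4), coarse (1/8).
        return [p(f) for p, f in zip(self.proj, (f1, f2, f3))]

embeddings = MultiScaleEmbedder()(torch.randn(1, 1, 64, 64))
print([e.shape for e in embeddings])
```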
While Yan teaches that, in the learning process, overlapping patch pairs with pixel pairs corresponding to a same position (e.g., body part) are used in the training batch and that positive and negative samples are selected from the image patches (see Yan [0029] and [0032]-[0033]), Yan and Liu do not explicitly disclose wherein a second image of each of the one or more pairs comprises an annotated point selected based on a segmentation mask of the anatomical object.
Cai teaches a related and pertinent convolutional neural network based weakly-supervised segmentation method for volumetric lesions (see Cai Abstract), where 2D training data for the segmentation model are created by generating slice image and label pairs (see Cai sect. 2. Method), and where the training labels are segmentation masks generated from initial lesion segmentations for the CNN model on the slice images (see Cai Fig. 1 and sect. 2.3. Weakly Supervised Slice-Propagated Segmentation).
At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Cai to the teachings of Yan and Liu, such that segmentation mask images of lesions are included in the training dataset for training the neural network based SAM method for detecting an anatomical location in a query image, where the positive and negative samples are selected from the image patches according to the included segmentation mask training labels.
This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results.
In this instance, Yan and Liu disclose a base method of using the disclosed SAM method to perform lesion matching between a template scan image and a follow-up query scan image, which uses a coarse-to-fine neural network architecture to extract global and local embedding tensor maps from the images to calculate similarity maps and detect the anatomical location in the query image, and in which, in the learning process, overlapping patch pairs with pixel pairs corresponding to a same position (e.g., body part) are used in the training batch, with positive and negative samples selected from the image patches.
Cai teaches a known technique for training a lesion segmentation method based on a convolutional neural network architecture, where the training labels include segmentation masks generated from initial lesion segmentations for the CNN model on the slice images.
One of ordinary skill in the art would have recognized that applying Cai’s teachings would allow the method of Yan and Liu to further include segmentation mask images of lesions in the training dataset for training the neural network based SAM method, with positive and negative samples selected from the image patches according to the included segmentation mask training labels, predictably leading to improved lesion matching using the SAM method with improved segmentation accuracy.
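For illustration of this combined-teachings rationale only, the following Python sketch selects positive samples from inside a lesion segmentation mask label and negative samples from outside it. The names are hypothetical, and the sketch is not asserted to be the training procedure of any cited reference.

```python
import numpy as np

def sample_pos_neg(mask: np.ndarray, rng: np.random.Generator, n: int = 4):
    """Select positive points inside the segmentation mask label and
    negative points outside it, returning (y, x) coordinates."""
    pos_flat = rng.choice(np.flatnonzero(mask), size=n, replace=False)
    neg_flat = rng.choice(np.flatnonzero(~mask), size=n, replace=False)
    to_yx = lambda flat: np.stack(np.unravel_index(flat, mask.shape), axis=1)
    return to_yx(pos_flat), to_yx(neg_flat)

rng = np.random.default_rng(0)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 25:45] = True                  # toy lesion mask training label
positives, negatives = sample_pos_neg(mask, rng)
print(positives, negatives, sep="\n")
```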
While Yan teaches that, in the training process of the SAM method, global and local similarity maps are computed for each sampled positive pair of global and local embeddings (see Yan [0046]-[0050]), and Cai teaches that initial lesion segmentations based on image foreground and background seed points are used as training labels for training a CNN model to perform lesion segmentation (see Cai sect. 2.1. Initial RECIST-Slice Segmentation and sect. 2.3. Weakly Supervised Slice-Propagated Segmentation), Yan, Liu, and Cai do not explicitly disclose that the annotated point is selected based on a distance to a centroid of the anatomical object in the segmentation mask.
Bonakdar Sakhi teaches a related and pertinent lesion detection method based on a trained machine learning model (see Bonakdar Sakhi Abstract), where lesion detection maps identifying portions of medical imaging data elements corresponding to detected lesions are input to a lesion segmentation process based on a watershed algorithm, which partitions the lesion shape masks into regions according to determined seed points of the mask (see Bonakdar Sakhi [0101] and [0149]-[0152]). In the lesion splitting and relabeling process, a distance transform generates a distance map by computing the shortest distance from each point in the lesion mask to the mask contour; each lesion mask’s center point, being the point with the highest distance, is identified as a seed point; watershed split lesion masks are generated; and the distance map is used to relabel seeds based on pairwise affinity and to merge regions corresponding to seeds with the same label, overcoming over-splitting in the split segmentation mask (see Bonakdar Sakhi [0168]-[0177]).
At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Bonakdar Sakhi to the teachings of Yan, Liu, and Cai, such that the lesion segmentation masks used as training labels are further refined by performing the lesion mask splitting and relabeling process based on the computed distance transform, which identifies center points with the highest distance to the mask contour as seed points, as taught by Bonakdar Sakhi.
This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results.
In this instance, Yan, Liu, and Cai disclose a base method of using the disclosed SAM method to perform lesion matching between a template scan image and a follow-up query scan image, which uses a coarse-to-fine neural network architecture to extract global and local embedding tensor maps from the images to calculate similarity maps and detect the anatomical location in the query image, and in which, in the learning process, overlapping patch pairs with pixel pairs corresponding to a same position (e.g., body part) are used in the training batch.
Bonakdar Sakhi teaches a known technique for a lesion segmentation process based on a watershed algorithm, which partitions the lesion shape masks into regions according to determined seed points of the mask, in which the lesion splitting and relabeling process performs a distance transform to generate a distance map computing the shortest distance from each point in the lesion mask to the mask contour, identifies each lesion mask’s center point, being the point with the highest distance, as a seed point, generates split lesion masks, and uses the distance map to relabel seeds based on pairwise affinity and merge regions corresponding to seeds with the same label, overcoming over-splitting in the segmentation mask.
One of ordinary skill in the art would have recognized that applying Bonakdar Sakhi’s teachings would allow the method of Yan, Liu, and Cai to further refine the lesion segmentation masks used as training labels by performing the lesion mask splitting and relabeling process based on the computed distance transform, identifying center points with the highest distance to the mask contour as seed points, and relabeling and merging corresponding seed point pairs and regions to avoid over-splitting in the lesion segmentation masks, predictably leading to improved lesion segmentation masks used for training in the suggested SAM method.
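Solely to illustrate the distance-transform seed selection mapped above, the following Python sketch identifies a mask’s center point as the pixel with the greatest distance to the mask contour; such a point can serve as a centroid-based seed. Names are hypothetical, and the full process of the cited art (watershed splitting and relabeling) is omitted.

```python
import numpy as np
from scipy import ndimage

def seed_point(mask: np.ndarray):
    """Return the (y, x) point of the mask farthest from the contour,
    i.e., the peak of the distance transform, usable as a seed point."""
    dist = ndimage.distance_transform_edt(mask)  # shortest distance to background
    return np.unravel_index(np.argmax(dist), mask.shape)

mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 25:45] = True                        # toy lesion mask
print(seed_point(mask))                          # near the mask's deepest interior point
```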
Regarding claim 3, please see the above rejection of claim 1. Yan, Liu, Cai, and Bonakdar Sakhi disclose the computer implemented method of claim 1, wherein the machine learning based extraction network is trained with multi-task learning to perform one or more auxiliary tasks (see Yan [0065], where the SAM method may be used to improve other medical image analysis tasks such as registration, lesion detection, and retrieval).
Regarding claim 4, please see the above rejection of claim 3. Yan, Liu, Cai, and Bonakdar Sakhi disclose the computer implemented method of claim 3, wherein the one or more auxiliary tasks comprise at least one of landmark detection, segmentation, pixel-wise matching, or image reconstruction (see Yan [0065], where the SAM method may be used to improve other medical image analysis tasks such as registration, lesion detection, and retrieval).
Regarding claim 5, please see the above rejection of claim 1. Yan, Liu, Cai, and Bonakdar Sakhi disclose the computer implemented method of claim 1, wherein the machine learning based extraction network is trained with unlabeled and unpaired training images (see Yan [0032], where unlabeled CT volumes are used in the learning process of the SAM method; see Yan [0036], where the universal SAM method may learn from unlabeled radiological images to detect arbitrary points of interest; and see Yan [0046], where a plurality of randomly selected images from unlabeled images are used in training).
Regarding claim 6, please see the above rejection of claim 1. Yan, Liu, Cai, and Bonakdar Sakhi disclose the computer implemented method of claim 1, wherein determining a location of the anatomical object in the second input medical image by comparing embeddings of the first set of embeddings corresponding to the point of interest with embeddings of the second set of embeddings comprises:
identifying matching embeddings of the second set of embeddings that are most similar to embeddings of the first set of embeddings corresponding to the point of interest (see Yan Fig. 2 and [0037], where global and local similarity maps are computed between anchor embedding tensors from the point of interest of the template image and the query embedding tensors, and used to detect the anatomical location); and
determining the location of the anatomical object in the second input medical image as a pixel or voxel that corresponds to the matching embeddings (see Yan Fig. 2 and [0037], where the peak of the global and local similarity maps is determined as the detected anatomical location on the query image; see Yan [0030] and [0065], where the coarse-to-fine neural network architecture of the SAM method determines pixel-wise feature embeddings).
Regarding claim 7, please see the above rejection of claim 1. Yan, Liu, Cai, and Bonakdar Sakhi disclose the computer implemented method of claim 1, wherein determining a location of the anatomical object in the second input medical image by comparing embeddings of the first set of embeddings corresponding to the point of interest with embeddings of the second set of embeddings comprises:
comparing embeddings of the first set of embeddings and the second set of embeddings with corresponding scales of the plurality of scales (see Yan Fig. 2 and [0037], where global and local similarity maps are computed between anchor embedding tensors from the point of interest of the template image and the query embedding tensors).
Regarding claim 8, please see the above rejection of claim 1. Yan, Liu, Cai, and Bonakdar Sakhi disclose the computer implemented method of claim 1, further comprising:
repeating the receiving, the extracting the first set of embeddings, the extracting the second set of embeddings, the determining, and the outputting using an additional input medical image as the second input medical image (see Yan [0043], where a database of images is used to test the lesion matching, suggesting repeating the method with additional medical images input as follow-up query images).
Regarding claim 9, please see the above rejection of claim 1. Yan, Liu, Cai, and Bonakdar Sakhi disclose the computer implemented method of claim 1, wherein the first input medical image and the second input medical image respectively comprise a baseline medical image and a follow-up medical image of a longitudinal imaging study of the patient (see Yan [0052], where the SAM method is used on performing lesion matching between a template scan and follow-up scans; and see Yan [0064], where an application may be lesion matching, which is an important clinical task for radiologists to longitudinally track disease progress).
Regarding claim 10, it recites an apparatus comprising means for performing the method of claim 1. Yan, Liu, Cai, and Bonakdar Sakhi teach an apparatus with means for performing the method of claim 1 (see Yan [0067], disclosing a device including a processor coupled with a memory storing a computer program to execute the disclosed teachings). Please see above for detailed claim analysis.
Please see the above rejection for claim 1, as the rationale to combine the teachings of Yan, Liu, Cai, and Bonakdar Sakhi is similar, mutatis mutandis.
Regarding claim 12, see above rejection for claim 10. It is an apparatus claim reciting similar subject matter as claim 3. Please see above claim 3 for detailed claim analysis as the limitations of claim 12 are similarly rejected.
Regarding claim 13, see above rejection for claim 12. It is an apparatus claim reciting similar subject matter as claim 4. Please see above claim 4 for detailed claim analysis as the limitations of claim 13 are similarly rejected.
Regarding claim 14, see above rejection for claim 10. It is an apparatus claim reciting similar subject matter as claim 5. Please see above claim 5 for detailed claim analysis as the limitations of claim 14 are similarly rejected.
Regarding claim 15, it recites a non-transitory computer readable medium storing computer program instructions, the computer program instructions when executed by a processor causing the processor to perform the method of claim 1. Yan, Liu, Cai, and Bonakdar Sakhi teach a non-transitory computer readable medium storing computer program instructions that, when executed by a processor, cause the processor to perform the method of claim 1 (see Yan [0068], disclosing a non-transitory computer readable storage medium with program instructions stored therein, the program instructions being configured to be executable by a computer to cause the computer to implement the disclosed teachings). Please see above for detailed claim analysis.
Please see the above rejection for claim 1, as the rationale to combine the teachings of Yan, Liu, Cai, and Bonakdar Sakhi is similar, mutatis mutandis.
Regarding claim 17, see above rejection for claim 15. It is a non-transitory computer readable medium claim reciting similar subject matter as claim 6. Please see above claim 6 for detailed claim analysis as the limitations of claim 17 are similarly rejected.
Regarding claim 18, see above rejection for claim 15. It is a non-transitory computer readable medium claim reciting similar subject matter as claim 7. Please see above claim 7 for detailed claim analysis as the limitations of claim 18 are similarly rejected.
Regarding claim 19, see above rejection for claim 15. It is a non-transitory computer readable medium claim reciting similar subject matter as claim 8. Please see above claim 8 for detailed claim analysis as the limitations of claim 19 are similarly rejected.
Regarding claim 20, see above rejection for claim 15. It is a non-transitory computer readable medium claim reciting similar subject matter as claim 9. Please see above claim 9 for detailed claim analysis as the limitations of claim 20 are similarly rejected.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY WING HO CHOI whose telephone number is (571) 270-3814. The examiner can normally be reached from 9:00 AM to 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VINCENT RUDOLPH can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TIMOTHY CHOI/Examiner, Art Unit 2671
/VINCENT RUDOLPH/Supervisory Patent Examiner, Art Unit 2671