Prosecution Insights
Last updated: April 19, 2026
Application No. 18/024,179

METHODS FOR IDENTIFYING CROSS-MODAL FEATURES FROM SPATIALLY RESOLVED DATA SETS

Non-Final OA with rejections under §101, §102, §103, and nonstatutory double patenting (DP)
Filed: Mar 01, 2023
Examiner: TRAN, DUY ANH
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: The General Hospital Corporation
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 3y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allowance Rate: 81% (104 granted / 128 resolved), +19.3% vs Tech Center average (above average)
Interview Lift: +17.5% among resolved cases with an interview (a strong lift)
Typical Timeline: 3y 1m average prosecution; 29 applications currently pending
Career History: 157 total applications across all art units
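The headline figures above are internally consistent and can be re-derived from the raw counts. A small sketch (not part of the report; variable names are ours):

```python
# Sanity check of the examiner statistics reported above. The counts
# (104 granted, 128 resolved, 157 total) come from the report itself.
granted, resolved, total = 104, 128, 157

allow_rate = 100 * granted / resolved      # career allowance rate, percent
pending = total - resolved                 # applications still open

print(f"Career allow rate: {allow_rate:.1f}%")   # 81.2%, reported as 81%
print(f"Currently pending: {pending}")           # 29, matching the report
```

Note that the pending count (29) is exactly total minus resolved, so the three career-history numbers agree with each other.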

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§102: 26.7% (-13.3% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 128 resolved cases.
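The per-statute rates above are reported as deltas against a Tech Center average, so the implied TC baseline for each statute is simply the examiner's rate minus the delta. A small sketch (not part of the report; dictionary names are ours):

```python
# Recover the implied Tech Center baseline from the reported examiner
# rates and their "vs TC avg" deltas. All numbers are from the report.
examiner_rate = {"101": 12.9, "102": 26.7, "103": 42.0, "112": 11.3}
delta_vs_tc   = {"101": -27.1, "102": -13.3, "103": +2.0, "112": -28.7}

for statute in examiner_rate:
    tc_avg = examiner_rate[statute] - delta_vs_tc[statute]  # implied baseline
    print(f"§{statute}: examiner {examiner_rate[statute]:.1f}% "
          f"vs implied TC avg {tc_avg:.1f}%")
```

Interestingly, all four implied baselines come out to the same 40.0%, which suggests the deltas were computed against a single overall estimate rather than per-statute averages.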

Office Action

Rejections: §101, §102, §103, and nonstatutory double patenting (DP)
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Election/Restrictions
Applicant's election without traverse of claims 1-10, 18, 20-25, 27-28, 30-32, and 40-47 in the reply filed on 09/02/2025 is acknowledged.

Information Disclosure Statement
The information disclosure statement (IDS) submitted on 09/02/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the IDS has been considered by the examiner.

Claim Status
Claim 47 is objected to under 37 CFR 1.75(c) as being a multiple dependent claim in improper form.
Claims 1-10, 18, 20-25, 27-28, 30-32, and 40-47 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-62 of copending Application No. 18/688,518 (U.S. 20250124570 A1).
Claims 1-10, 18, 20-25, 27-28, 30-32, and 40-47 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Claims 1-10, 18, 20-24, 28, 30-32, and 40-44 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Abdelmoula, Walid M., et al., "Automatic 3D nonlinear registration of mass spectrometry imaging and magnetic resonance imaging data" (Abdelmoula).
Claims 25 and 27 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelmoula in view of Hayasaka, Satoru, et al., "A non-parametric approach for co-analysis of multi-modal brain imaging data: application to Alzheimer's disease" (Hayasaka).
Claims 45-47 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelmoula in view of Sturm (U.S. 20140254900 A1).
Claim Objections
Claim 47 is objected to under 37 CFR 1.75(c) as being in improper form because it is a multiple dependent claim improperly depending from claims 1 and 44. See MPEP § 608.01(n). Accordingly, claim 47 has not been further treated on the merits.

Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting, provided the reference application or patent either is shown to be commonly owned with the examined application or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first-inventor-to-file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first-inventor-to-file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

The USPTO may not institute a derivation proceeding in the absence of a timely filed petition. The U.S. Patent and Trademark Office normally will not institute a derivation proceeding between applications, or between a patent and an application, having common ownership (see 37 CFR 42.411). The applicant should amend or cancel claims such that the reference and the instant application no longer contain claims directed to the same invention.

Claims 1-10, 18, 20-25, 27-28, 30-32, and 40-47 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-62 of copending Application No. 18/688,518 (U.S. 20250124570 A1).
Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the present application are obvious variants of the reference claims. Although instant claim 1 is not identical to claims 1 and 16 of Application 18/688,518, it is not patentably distinct from them because claim 1 is broader than, and fully encompassed by, claims 1 and 16 of Application 18/688,518. The examiner notes what independent claim 1 of the instant application shares in common with the reference claims:

Application 18/024,179 (U.S. 20230306761 A1), claim 1: "A method of identifying a cross-modal feature from two or more spatially resolved data sets, the method comprising: (a) registering the two or more spatially resolved data sets to produce an aligned feature image comprising the spatially aligned two or more spatially resolved data sets; and (b) extracting the cross-modal feature from the aligned feature image."

Application 18/688,518 (U.S. 20250124570 A1), claim 16: "A method of identifying a cross-modal feature from two or more spatially resolved data sets, the method comprising: (a) registering the two or more spatially resolved data sets to produce an aligned feature image comprising the spatially aligned two or more spatially resolved data sets; and (b) extracting the cross-modal feature from the aligned feature image."

Application 18/688,518 (U.S. 20250124570 A1), claim 1: "A method of generating a diagnostic, prognostic, or theranostic for a disease state from three or more imaging modalities obtained from a biopsy sample from a subject, the method comprising comparing a plurality of cross-modal features to identify a correlation between at least one cross-modal feature parameter and the disease state to identify the diagnostic, prognostic, or theranostic, wherein the plurality of cross-modal features is identified by steps comprising: (a) registering the three or more spatially resolved data sets to produce an aligned feature image comprising the spatially aligned three or more spatially resolved data sets; and (b) extracting the cross-modal feature from the aligned feature image; wherein each cross-modal feature comprises a cross-modal feature parameter, and wherein the three or more spatially resolved data sets are outputs by the corresponding imaging modality selected from the group consisting of the three or more imaging modalities."

This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-10, 18, 20-25, 27-28, 30-32, and 40-47 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When reviewing independent claim 1, and based upon consideration of all of the relevant factors with respect to the claim as a whole, claims 1-10, 18, 20-25, 27-28, 30-32, and 40-47 are held to claim an abstract idea without reciting elements that amount to significantly more than the abstract idea, and are therefore rejected as ineligible subject matter under 35 U.S.C. 101. The examiner will analyze claim 1.
The rationale for this finding, under MPEP § 2106, is explained below. The claimed invention (1) must be directed to one of the four statutory categories, and (2) must not be wholly directed to subject matter encompassing a judicially recognized exception, as defined below. The following two-step analysis is used to evaluate these criteria.

Step 1: Is the claim directed to one of the four patent-eligible subject matter categories: process, machine, manufacture, or composition of matter? Examining the claim under 35 U.S.C. 101, the examiner interprets the claim as directed to a process, since the claim recites a method.

Step 2A, Prong One: Does the claim wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is it a particular practical application of a judicial exception? The examiner interprets that the judicial exception applies, since the claim 1 limitations "(a) registering the two or more spatially resolved data sets to produce an aligned feature image comprising the spatially aligned two or more spatially resolved data sets; and (b) extracting the cross-modal feature from the aligned feature image" are directed to an abstract idea. Claim 1 is related to a mental process: it is a claim to "identifying a cross-modal feature from two or more spatially resolved data sets," where the step of extracting the cross-modal feature from the aligned feature image is recited at such a high level of generality that it could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016), and/or amounts to performing a mental process in a computer environment.
An example of a case identifying a mental process performed in a computer environment as an abstract idea is Symantec Corp., 838 F.3d at 1316-18, 120 USPQ2d at 1360. If the claim recites a judicial exception (i.e., an abstract idea enumerated in MPEP § 2106.04(a)(2), a law of nature, or a natural phenomenon), the claim requires further analysis under Prong Two.

Step 2A, Prong Two: Does the claim recite additional elements that integrate the judicial exception into a practical application? The examiner interprets that the claim 1 limitations do not integrate the judicial exception into a practical application, since the claimed steps are performed by generic computer components (see MPEP § 2106.05(g)) and merely link the use of the judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). See MPEP § 2106.04(a). Because a judicial exception is not eligible subject matter, Bilski, 561 U.S. at 601, 95 USPQ2d at 1005-06 (quoting Chakrabarty, 447 U.S. at 309, 206 USPQ at 197 (1980)), if there are no additional claim elements besides the judicial exception, or if the additional claim elements merely recite another judicial exception, that is insufficient to integrate the judicial exception into a practical application. See, e.g., RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1327, 122 USPQ2d 1377 (Fed. Cir. 2017) ("Adding one abstract idea (math) to another abstract idea (encoding and decoding) does not render the claim non-abstract"). If there are no additional elements in the claim, then it cannot be eligible. In such a case, after making the appropriate rejection (see MPEP § 2106.07 for more information on formulating a rejection for lack of eligibility), it is a best practice for the examiner to recommend an amendment, if possible, that would resolve the eligibility of the claim.
Step 2B: If the claim does not integrate the judicial exception into a practical application, the examiner must determine whether the claim recites additional elements that amount to significantly more than the judicial exception. The examiner interprets that the claims do not amount to significantly more, since the step of "registering the two or more spatially resolved data sets to produce an aligned feature image comprising the spatially aligned two or more spatially resolved data sets" adds only insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea, such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)). Furthermore, the generic computer components recited as performing generic computer functions that are well-understood, routine, and conventional activities amount to no more than implementing the abstract idea with a computerized system.

Claims 2-10, 18, 20-25, 27-28, 30-32, and 40-47 depend on independent claim 1 and include all the limitations of claim 1. The examiner finds that claims 2-10, 18, 20-25, 27-28, 30-32, and 40-47 do not recite significantly more, since the claims only recite obtaining pocket measurements corresponding to a plurality of teeth. Thus, claims 2-10, 18, 20-25, 27-28, 30-32, and 40-47 recite the same abstract idea and therefore are not drawn to eligible subject matter, as they are directed to the abstract idea without significantly more. Therefore, the claims are rejected under 35 U.S.C. 101.

Claims 45-47 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claims do not fall within at least one of the four categories of patent-eligible subject matter because they are directed to a computer-readable storage medium or a memory that does not exclude transitory forms of signal transmission (often referred to as "signals per se"), such as a propagating electrical or electromagnetic signal or carrier wave, and therefore do not fall within at least one of the four categories (a process, machine, manufacture, or composition of matter). It is suggested that the claim language be amended to define the computer-readable medium as "a non-transitory computer-readable medium" to satisfy the requirements and limit the claimed invention to eligible subject matter.

Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention; or (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-10, 18, 20-24, 28, 30-32, and 40-44 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Abdelmoula, Walid M., et al., "Automatic 3D nonlinear registration of mass spectrometry imaging and magnetic resonance imaging data" (Abdelmoula).

Regarding claim 1, Abdelmoula discloses a method of identifying a cross-modal feature from two or more spatially resolved data sets (Abstract: "our method using multimodal high-spectral resolution matrix-assisted laser desorption ionization (MALDI) 9.4 T MSI and 7 T in vivo MRI data … Results show the distribution of some identified molecular ions of the EGFR inhibitor erlotinib, a phosphatidylcholine lipid, and cholesterol, which were reconstructed in 3D and mapped to the MRI space."), the method comprising: (a) registering the two or more spatially resolved data sets to produce an aligned feature image comprising the spatially aligned two or more spatially resolved data sets (Fig. 1 and 3D MSI-MRI Nonlinear Image Registration: "Image registration is typically performed between two images, namely, fixed image If and moving image Im. The moving image, in this case a t-SNE image, is warped to be spatially aligned with the fixed image, the MR image. The proposed registration scheme is presented in Figure 1. … the computed transformation parameters were applied to each m/z image in the corresponding MSI datacube to spatially align it with the associated MR image."; Spatially Mapped t-SNE for MSI-MRI Integration: "The 3D MSI/MRI alignment problem was approximated and implemented using a series of sequential 2D alignment in slice-to-slice fashion between the t-SNE and MR images, as illustrated in Figure 1. Image registration computes a transformation matrix to nonlinearly warp the moving image which becomes spatially aligned with the corresponding MR slice"); and (b) extracting the cross-modal feature from the aligned feature image (Figs. 1-5; 3D MSI Data Segmentation Using HSNE; Integrated Molecular and Anatomical Phenotypes in a Normal Brain: "This MSI-MRI fusion also confirms accurate colocalization of these two ion signatures with the MR anatomical regions of striatum, corpus callosum, and cortex"; Spatially Mapped t-SNE for MSI-MRI Integration: "an automatic method for direct multimodal nonlinear alignment of 3D MSI and MRI data. The proposed concept, as illustrated in Figure 1, is based on simplifying the dimensional complexity of the MSI data to extract structural features common to the two modalities and thus can establish spatial correspondence").

Regarding claim 2, Abdelmoula discloses that step (a) comprises dimensionality reduction for each of the two or more data sets (Spatially Mapped t-SNE for MSI-MRI Integration: "The dimensionality reduction method of t-SNE was used to compute a nonlinear mapping of the high dimensional mass spectra into a lower dimensional representation, in 3D space."; Magnetic Resonance Imaging).

Regarding claim 3, Abdelmoula discloses that the dimensionality reduction is performed by uniform manifold approximation and projection (UMAP), isometric mapping (Isomap), t-distributed stochastic neighbor embedding (t-SNE), potential of heat diffusion for affinity-based transition embedding (PHATE), principal component analysis (PCA), diffusion maps, or non-negative matrix factorization (NMF) (citing the same Spatially Mapped t-SNE for MSI-MRI Integration passage quoted for claim 2).

Regarding claim 4, Abdelmoula discloses that the dimensionality reduction is performed by uniform manifold approximation and projection (UMAP) (t-SNE Maps of 3D MSI Data: "This means similar data points that are located near each other in the high dimensional space will be projected close to each other in a low dimensional representation, whereas dissimilar high dimensional data points will be projected far apart.").

Regarding claim 5, Abdelmoula discloses that step (a) comprises optimizing global spatial alignment in the aligned feature image (3D MSI-MRI Nonlinear Image Registration: "The registration process was initialized using affine transformation to compute the global deformation parameters (translations, rotation, scaling, and shearing) and followed by increasing the deformations degree-of-freedom to model the local deformations using the cubic BSpline transform."; Spatially Mapped t-SNE for MSI-MRI Integration: "The alignment quality has significantly been improved after modeling the nonlinear deformations using the cubic BSpline transformation").

Regarding claim 6, Abdelmoula discloses that step (a) comprises optimizing local alignment in the aligned feature image (citing the same 3D MSI-MRI Nonlinear Image Registration and Spatially Mapped t-SNE passages quoted for claim 5).

Regarding claim 7, Abdelmoula discloses that the method further comprises clustering the two or more spatially resolved data sets to supplement the data sets with an affinity matrix representing inter-data point similarity (3D MSI Data Segmentation Using HSNE: "The hierarchical nature of HSNE means that the original high dimensional data points are represented, across different scales, by parental points called landmarks. … The selected HSNE cluster, Lcs, assigns a probability for each of the high dimensional data points based on their likelihood of being represented within that cluster."; Fig. 5: "The aligned 3D MALDI MSI datacube of the GBM39 brain model was analyzed using HSNE and the identified molecular structures are shown in Figure 5. … The 3D reconstruction of the segmented tumor and normal regions are shown in Figure 5b as red and green clusters, respectively.").

Regarding claim 8, Abdelmoula discloses that the clustering step comprises extracting a high dimensional graph from the aligned feature image (Fig. 5: "The aligned 3D MALDI MSI datacube of the GBM39 brain model was analyzed using HSNE and the identified molecular structures are shown in Figure 5. … The HSNE algorithm automatically constructed 3 embedding levels based on the 3D MSI data distribution in the high dimensional space. … The 3D reconstruction of the segmented tumor and normal regions are shown in Figure 5b as red and green clusters, respectively.").

Regarding claim 9, Abdelmoula discloses that clustering is performed according to the Leiden algorithm, Louvain algorithm, random walk graph partitioning, spectral clustering, or affinity propagation (citing the same Figure 5 passage quoted for claim 8).
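For claims 5-6, the examiner relies on Abdelmoula's two-stage registration: an affine initialization computing the global deformation parameters (translations, rotation, scaling, and shearing), followed by cubic B-spline refinement of local deformations. Purely as an illustration of the affine stage, the sketch below composes those four parameter groups into one homogeneous matrix and warps moving-image coordinates into fixed-image space; it is a generic sketch, not the paper's implementation, the B-spline stage is omitted, and all parameter values are invented.

```python
import numpy as np

def affine_matrix(tx, ty, theta, sx, sy, shear):
    """Compose translation, rotation, scale, and shear into a 3x3 matrix."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)  # translation
    R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)   # rotation
    S = np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)  # scaling
    H = np.array([[1, shear, 0], [0, 1, 0], [0, 0, 1]], float)  # shearing
    return T @ R @ S @ H

def warp_points(points, A):
    """Map (N, 2) moving-image coordinates into fixed-image space."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ A.T)[:, :2]

# Illustrative parameters: quarter-turn rotation, then a (2, -1) shift.
A = affine_matrix(tx=2.0, ty=-1.0, theta=np.pi / 2, sx=1.0, sy=1.0, shear=0.0)
print(warp_points(np.array([[1.0, 0.0]]), A))  # rotation applied, then shift
```

In the paper's scheme this transform would be estimated per slice between the t-SNE image and the MR slice, then applied to every m/z image in the MSI datacube.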
Regarding claim 10, Abdelmoula discloses the method comprises prediction of cluster-assignment to unseen data, wherein the method further comprises one or more of:(a) modelling cluster-cluster spatial interactions;(b) an intensity-based analysis;(c) an analysis of an abundance of cell types or a heterogeneity of predetermined regions in the data;(d) an analysis of spatial interactions between objects;(e) an analysis of type-specific neighborhood interactions;(f) an analysis of high-order spatial interactions; or (g) an analysis of prediction of spatial niches. (3D MSI Data Segmentation Using HSNE: “ The hierarchical nature of HSNE means that the original high dimensional data points are represented, across different scales, by parental points called landmarks. … The selected HSNE cluster, Lcs, assigns aprobability for each of the high dimensional data points based on their likelihood of being represented within that cluster.” ; Fig.5 and The aligned 3D MALDI MSI datacube of the GBM39 brain model was analyzed using HSNE and the identified molecular structures are shown in Figure 5. … The 3D reconstruction of the segmented tumor and normal regions are shown in Figure 5b as red and green clusters, respectively.) Regarding claim 18, Abdelmoula discloses the method further comprises classifying the data. (Fig.4: and 3D MSI Data Segmentation Using HSNE: “ The selected HSNE cluster, Lcs, assigns a probability for each of the high dimensional data points based on their likelihood of being represented within that cluster. … The Pearson correlation coefficient was calculated between an HSNE cluster and MSI datacube to find colocalized m/z ion features within that HSNE cluster.”; Discussion: “the T1 postcontrast MRI tumor region and the HSNE spatially mapped cluster of the molecularly based segmented tumor. 
The Dice coefficient is a similarity metric that measures the overlap between two segmented images;”) Regarding claim 20, Abdelmoula discloses the classifying process is performed by a hard classifier, soft classifier, or fuzzy classifier. (Fig.4: and 3D MSI Data Segmentation Using HSNE: ; Discussion: “adopted both quantitative and qualitative assessment approaches and our results were visually examined by experts in MSI. The current quantitative assessment approaches would generally rely on measuring the Euclidean distances between common landmarks or measuring the overlap between segmented structures in the two imaging modalities. … the T1 postcontrast MRI tumor region and the HSNE spatially mapped cluster of the molecularly based segmented tumor. The Dice coefficient is a similarity metric that measures the overlap between two segmented images;”) Regarding claim 21, Abdelmoula discloses the method further comprises defining one or more spatially resolved objects in the aligned feature image. (Discussion: “adopted both quantitative and qualitative assessment approaches and our results were visually examined by experts in MSI. The current quantitative assessment approaches would generally rely on measuring the Euclidean distances between common landmarks or measuring the overlap between segmented structures in the two imaging modalities. … the T1 postcontrast MRI tumor region and the HSNE spatially mapped cluster of the molecularly based segmented tumor. The Dice coefficient is a similarity metric that measures the overlap between two segmented images;” Fig.4: and 3D MSI Data Segmentation Using HSNE) Regarding claim 22, Abdelmoula discloses the method further comprises analyzing spatially resolved objects. (Fig.4: and 3D MSI Data Segmentation Using HSNE; Discussion: “adopted both quantitative and qualitative assessment approaches and our results were visually examined by experts in MSI. 
The current quantitative assessment approaches would generally rely on measuring the Euclidean distances between common landmarks or measuring the overlap between segmented structures in the two imaging modalities. … the T1 postcontrast MRI tumor region and the HSNE spatially mapped cluster of the molecularly based segmented tumor. The Dice coefficient is a similarity metric that measures the overlap between two segmented images;”) Regarding claim 23, Abdelmoula discloses the analyzing spatially resolved objects comprises segmentation. (Discussion: “adopted both quantitative and qualitative assessment approaches and our results were visually examined by experts in MSI. The current quantitative assessment approaches would generally rely on measuring the Euclidean distances between common landmarks or measuring the overlap between segmented structures in the two imaging modalities. … the T1 postcontrast MRI tumor region and the HSNE spatially mapped cluster of the molecularly based segmented tumor. The Dice coefficient is a similarity metric that measures the overlap between two segmented images;”) Regarding claim 24, Abdelmoula discloses the method further comprises inputting one or more landmarks into the aligned feature image. (Fig.4; Discussion: “adopted both quantitative and qualitative assessment approaches and our results were visually examined by experts in MSI. The current quantitative assessment approaches would generally rely on measuring the Euclidean distances between common landmarks or measuring the overlap between segmented structures in the two imaging modalities. … the T1 postcontrast MRI tumor region and the HSNE spatially mapped cluster of the molecularly based segmented tumor. The Dice coefficient is a similarity metric that measures the overlap between two segmented images;”) Regarding claim 28, Abdelmoula discloses wherein step (b) comprises multi-domain translation or a predictive output based on the cross-modal feature. 
(Concluding Remark: “Integration of such multimodal imaging data would bridge the gaps between anatomical and biomolecular phenotypes for better understanding of various biological problems and paving the way for establishing predictive models between those modalities.”) Regarding claim 30, Abdelmoula discloses the multi-domain translation is performed by generative adversarial network or adversarial autoencoder. (t-SNE Maps of 3D MSI Data.: “The registration process was initialized using affine transformation to compute the global deformation parameters (translations, rotation, scaling, and shearing) and followed by increasing the deformations degree-of-freedom to model the local deformations using the cubic BSpline transform.”; the person skill in the art would obvious design choice that the B-splines transform model that can be combine/replace with adversarial autoencoder .) Regarding claim 31, Abdelmoula discloses at least one of the two or more spatially resolved data sets is an image from immunohistochemistry, imaging mass cytometry, multiplexed ion beam imaging, mass spectrometry imaging, cell staining, RNA-ISH, spatial transcriptomics, or codetection by indexing imaging. (MALDI Mass Spectrometry Imaging.: “. Mass spectrometry imaging of the healthy mice brains was performed using a Rapiflex (Bruker Daltonics, Billerica, MA), MALDI-TOF/TOF mass spectrometer”) Regarding claim 32, Abdelmoula discloses at least one of the spatially resolved measurement modalities is;(a) immunofluorescence imaging;(b) imaging mass cytometry;(c) multiplexed ion beam imaging;(d) mass spectrometry imaging that is MALDI imaging, DESI imaging, or SIMS imaging;(e) cell staining that is H&E, toluidine blue, or fluorescence staining;(f) RNA-ISH that is RNAScope;(g) spatial transcriptomics; or (h) codetection by indexing imaging. (MALDI Mass Spectrometry Imaging.: “. 
Mass spectrometry imaging of the healthy mice brains was performed using a Rapiflex (Bruker Daltonics, Billerica, MA), MALDI-TOF/TOF mass spectrometer”)

Regarding claim 40, Abdelmoula discloses a method of identifying a diagnostic, prognostic, or theranostic for a disease state from two or more imaging modalities, the method comprising comparing a plurality of cross-modal features to identify a correlation between at least one cross-modal feature parameter and the disease state to identify the diagnostic, prognostic, or theranostic, wherein the plurality of cross-modal features is identified according to claim 1, wherein each cross-modal feature comprises a cross-modal feature parameter, and wherein the two or more spatially resolved data sets are output by the corresponding imaging modality selected from the group consisting of the two or more imaging modalities. (Figs. 4-6 and Tumor Specific Molecule Mapped to MRI.: “Co-localized m/z ion features within each of the HSNE clusters were identified by calculating the Pearson’s correlation between the selected HSNE cluster and the spectral information on the GBM39 model … The distributions of the spectral correlation within each of the HSNE clusters of normal and tumor are shown in Figure S4.”; Figure 5. Hierarchical stochastic neighbor embedding (HSNE) identifies spectral patterns associated with tumor and normal tissue types in the GBM39 mouse brain model.)

Regarding claim 41, Abdelmoula discloses that the cross-modal feature parameter is a molecular signature, single molecular marker, or abundance of markers. (Figs. 4-6 and Identification of 3D Molecular Patterns and Integration with MRI.: “These anatomical-like molecular structures were integrated to constitute a 3D composite image that was rendered (Figure 4b) and fused with the T2-RARE MR image (Figure 4d).
These molecular patterns reconcile the MR anatomical structures (Figure 4c), and their overlay on the MR image visually confirms high accuracy of the nonlinear registration at distinct anatomical regions (Figure 4d).”)

Regarding claim 42, Abdelmoula discloses that the diagnostic, prognostic, or theranostic is individualized to an individual that is the source of the two or more spatially resolved data sets. (Fig. 6 and DISCUSSION: “Multimodal integration between MSI and MRI data is a natural and possibly foundational step in paving the way to harvest benefits of interesting complementary information for building more robust models of tumor growth and/or response to treatment.”)

Regarding claim 43, Abdelmoula discloses that the diagnostic, prognostic, or theranostic is a population-level diagnostic, prognostic, or theranostic. (Fig. 6 and DISCUSSION: “Multimodal integration between MSI and MRI data is a natural and possibly foundational step in paving the way to harvest benefits of interesting complementary information for building more robust models of tumor growth and/or response to treatment.”)

Regarding claim 44, Abdelmoula discloses a method of identifying a trend in a parameter of interest within the plurality of aligned feature images identified according to the method of any one of claim 1, the method comprising identifying a parameter of interest in the plurality of aligned feature images and comparing the parameter of interest among the plurality of the aligned feature images to identify the trend. (Figs. 3-4; Integrated Molecular and Anatomical Phenotypes in a Normal Brain.: “Multimodal integration of 3D MSI and MRI data has enabled fusion of multiscale data at the molecular and organ levels, respectively.
Figure 2 shows the distribution of two ion features at m/z 864.5 ± 0.1 and m/z 840.5 ± 0.1 that were nonlinearly deformed and overlaid atop of the T2-RARE MR volumetric image within the region of interest in the normal brain … This MSI-MRI fusion also confirms accurate colocalization of these two ion signatures with the MR anatomical regions of striatum, corpus callosum, and cortex”)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 25 and 27 are rejected under 35 U.S.C.
103 as being unpatentable over Abdelmoula, Walid M., et al. (“Automatic 3D nonlinear registration of mass spectrometry imaging and magnetic resonance imaging data.”; Abdelmoula), in view of Hayasaka, Satoru, et al. (“A non-parametric approach for co-analysis of multi-modal brain imaging data: application to Alzheimer's disease.”; Hayasaka).

Regarding claim 25, Abdelmoula discloses the claimed invention except that step (b) comprises permutation testing for enrichment or depletion of cross-modal features or produces a list of p-values and/or identities of enriched or depleted factors. Hayasaka discloses that step (b) comprises permutation testing for enrichment or depletion of cross-modal features or produces a list of p-values and/or identities of enriched or depleted factors. (Permutation test framework: “The permutation test works by generating the distribution of a test statistic based on random re-assignment, or permutation, of data labels … Corrected P values can be assessed by comparing the test statistic from the final permutation, the one with the correct group labels, to this empirical distribution.”) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Abdelmoula by including the permutation test taught by Hayasaka, yielding a non-parametric approach for co-analysis of multi-modal brain imaging data; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve the analysis and interpretation of findings in multi-modal imaging studies. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 27, Abdelmoula, as modified by Hayasaka, discloses the claimed invention.
Hayasaka further discloses that the permutation testing is performed by a mean-value permutation test. (Statistical analysis: “The perfusion t test was adjusted for age and reference perfusion as covariates in an ANCOVA model. The mean perfusion value of the motor cortex was chosen as reference perfusion for each subject”)

Claims 45-47 are rejected under 35 U.S.C. 103 as being unpatentable over Abdelmoula, Walid M., et al. (“Automatic 3D nonlinear registration of mass spectrometry imaging and magnetic resonance imaging data.”; Abdelmoula), in view of Sturm (U.S. 20140254900 A1).

Regarding claim 45, Abdelmoula discloses the claimed invention except for a computer-readable storage medium having stored thereon a computer program for identifying a cross-modal feature from two or more spatially resolved data sets, the computer program comprising a routine set of instructions for causing the computer to perform the steps from the method of claim 1. Sturm discloses a computer-readable storage medium having stored thereon a computer program for identifying a cross-modal feature from two or more spatially resolved data sets, the computer program comprising a routine set of instructions for causing the computer to perform the steps from the method of claim 1.
(Paragraph 33: “the algorithm addresses certain factors or parameters in order to make a comprehensive evaluation and identify the feature of interest based on positional and other data accumulated from multiple imaging modalities.”; Paragraph 38: “As one skilled in the art would recognize as necessary or best-suited for performance of the methods of the invention, a computer system or machines of the invention include one or more processors (e.g., a central processing unit (CPU) a graphics processing unit (GPU) or both), a main memory and a static memory, which communicate with each other via a bus.”) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Abdelmoula by including the computer system or machines taught by Sturm for detecting features of interest in cardiovascular images; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve efficiency in a computer environment. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 46, Abdelmoula discloses the claimed invention except for a computer-readable storage medium having stored thereon a computer program for identifying a diagnostic, prognostic, or theranostic for a disease state from two or more imaging modalities, the computer program comprising a routine set of instructions for causing the computer to perform the steps from the method of claim 1. Sturm discloses a computer-readable storage medium having stored thereon a computer program for identifying a diagnostic, prognostic, or theranostic for a disease state from two or more imaging modalities, the computer program comprising a routine set of instructions for causing the computer to perform the steps from the method of claim 1.
(Paragraph 38: “As one skilled in the art would recognize as necessary or best-suited for performance of the methods of the invention, a computer system or machines of the invention include one or more processors (e.g., a central processing unit (CPU) a graphics processing unit (GPU) or both), a main memory and a static memory, which communicate with each other via a bus.”; Paragraphs 49 and 54: “At a block 304, the search model to be used for identifying a feature of interest is selected, trained and validated. … which is then used as an evaluation model for evaluating risk of a diabetic condition.”) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Abdelmoula by including the computer system or machines taught by Sturm for detecting features of interest in cardiovascular images; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve efficiency in a computer environment. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Regarding claim 47, Abdelmoula discloses the claimed invention except for a computer-readable storage medium having stored thereon a computer program for identifying a trend in a parameter of interest within the plurality of aligned feature images identified according to the method of claim 1, the computer program comprising a routine set of instructions for causing the computer to perform the steps from the method of claim 44.
Sturm discloses a computer-readable storage medium having stored thereon a computer program for identifying a trend in a parameter of interest within the plurality of aligned feature images identified according to the method of claim 1, the computer program comprising a routine set of instructions for causing the computer to perform the steps from the method of claim 44. (Paragraph 38: “As one skilled in the art would recognize as necessary or best-suited for performance of the methods of the invention, a computer system or machines of the invention include one or more processors (e.g., a central processing unit (CPU) a graphics processing unit (GPU) or both), a main memory and a static memory, which communicate with each other via a bus.”; Paragraphs 49 and 54: “At a block 304, the search model to be used for identifying a feature of interest is selected, trained and validated. … which is then used as an evaluation model for evaluating risk of a diabetic condition.”) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Abdelmoula by including the computer system or machines taught by Sturm for detecting features of interest in cardiovascular images; one of ordinary skill in the art would have been motivated to combine the references because doing so would improve efficiency in a computer environment. Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sabuncu et al (U.S.
20070086677 A1), “Incorporating Prior Information From Pre Aligned Image Pairs into EMST based Image Registration”, teaches a system and method for automatic image registration using prior information from pre-aligned image pairs. The previously aligned image pair is one of an image pair earlier in a sequence of registration problems and a training image pair.

Young et al (U.S. 20100152577 A1), “Automated Diagnosis and Alignment Supplemented with PET/MR Flow Estimation”, teaches a method of automated diagnosis using a positron emission tomography scanner. A diagnostic image of a region of interest is produced. A knowledge maintenance engine is consulted for data from past imaging scans and diagnoses. The diagnostic image is analyzed, identifying areas of the image that appear different from images taken of an asymptomatic control collective.

Wang et al (U.S. 20150131880 A1), “Method Of, And Apparatus For, Registration of Medical Images”, teaches an apparatus for registering medical image data representing a tubular structure, comprising a data processing unit for obtaining first medical image data and second medical image data; a region identification unit for identifying the tubular structure in the first medical image data and defining in the first medical image data a volumetric region of interest; and a registration unit for performing a registration of the subset of the first medical image data with at least some of the second medical image data, wherein the registration comprises at least one of a rigid registration and an affine registration.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Duy A Tran whose telephone number is (571) 272-4887. The examiner can normally be reached Monday-Friday, 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL R MISTRY, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DUY TRAN/
Examiner, Art Unit 2674

/ONEAL R MISTRY/
Supervisory Patent Examiner, Art Unit 2674
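The Dice coefficient, which the examiner's citations against claims 22-24 use as the overlap metric between the MRI tumor region and the molecularly segmented HSNE cluster, is simple to state concretely. A minimal Python sketch follows; the function name and toy masks are illustrative, not code from either the application or Abdelmoula:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A∩B| / (|A| + |B|) over flattened binary masks."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(1 for a in mask_a if a) + sum(1 for b in mask_b if b)
    # convention: two empty masks are treated as perfectly overlapping
    return 1.0 if total == 0 else 2.0 * inter / total

# toy example: hypothetical MRI tumor mask vs. MSI-derived cluster (flattened)
mri_mask = [1, 1, 0, 1, 0, 0]
msi_mask = [1, 0, 0, 1, 1, 0]
score = dice_coefficient(mri_mask, msi_mask)  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 indicates identical segmentations; 0.0 indicates no overlap.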
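The registration vocabulary quoted against claim 30 (affine initialization computing translations, rotation, scaling, and shearing) can be made concrete with a small 2D sketch. This is an illustrative Python helper under assumed parameter names, not the B-spline pipeline of Abdelmoula:

```python
import math

def affine_2d(points, tx=0.0, ty=0.0, theta=0.0, sx=1.0, sy=1.0, shear=0.0):
    """Apply a 2D affine transform (scale, shear, rotate, then translate)
    to a list of (x, y) points."""
    c, s = math.cos(theta), math.sin(theta)
    out = []
    for x, y in points:
        xs = sx * x + shear * y  # scale and shear
        ys = sy * y
        out.append((c * xs - s * ys + tx,  # rotate, then translate
                    s * xs + c * ys + ty))
    return out

# pure translation of a hypothetical landmark
moved = affine_2d([(1.0, 2.0)], tx=3.0, ty=4.0)  # [(4.0, 6.0)]
```

In practice such a global affine step supplies the initialization that a higher degree-of-freedom deformable model (e.g., cubic B-splines) then refines locally.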
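The claim 40 citation rests on Pearson's correlation between a selected HSNE cluster and the spectral information. For reference, a self-contained Python version of the coefficient; the toy vectors are hypothetical, not data from the reference:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical cluster intensity vs. one m/z ion feature across pixels
r = pearson_r([0.1, 0.4, 0.9], [0.2, 0.8, 1.8])  # perfectly linear → 1.0
```

Values near +1 indicate an ion feature co-localized with the cluster, which is how colocalized m/z features would be ranked within each cluster.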
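The Hayasaka permutation-test framework cited against claims 25 and 27 (shuffling group labels to build an empirical null distribution, then reading the p-value off that distribution) can be illustrated with a mean-difference variant. The function and data below are an illustrative sketch, not code from the reference:

```python
import random

def mean_diff_permutation_test(group_a, group_b, n_permutations=10000, seed=0):
    """Two-sample permutation test on the absolute difference of group means.

    Shuffles the pooled values, re-splits them into pseudo-groups of the
    original sizes, and returns the fraction of shuffles whose mean difference
    is at least as extreme as the observed one (an empirical p-value).
    """
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a -
                   sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            hits += 1
    # add-one smoothing so the empirical p-value is never exactly zero
    return (hits + 1) / (n_permutations + 1)

# hypothetical feature abundances in tumor vs. normal regions
p = mean_diff_permutation_test([10, 11, 12, 13], [0, 1, 2, 3],
                               n_permutations=2000, seed=1)
```

Because the null distribution is built from the data itself, no parametric assumptions are needed, which is the appeal of the non-parametric approach for multi-modal co-analysis.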

Prosecution Timeline

Mar 01, 2023
Application Filed
Oct 11, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573024
IMAGE AUGMENTATION FOR MACHINE LEARNING BASED DEFECT EXAMINATION
2y 5m to grant · Granted Mar 10, 2026
Patent 12561934
AUTOMATIC ORIENTATION CORRECTION FOR CAPTURED IMAGES
2y 5m to grant · Granted Feb 24, 2026
Patent 12548284
METHOD FOR ANALYZING ONE OR MORE ELEMENT(S) OF ONE OR MORE PHOTOGRAPHED OBJECT(S) IN ORDER TO DETECT ONE OR MORE MODIFICATION(S), AND ASSOCIATED ANALYSIS DEVICE
2y 5m to grant · Granted Feb 10, 2026
Patent 12530798
LEARNED FORENSIC SOURCE SYSTEM FOR IDENTIFICATION OF IMAGE CAPTURE DEVICE MODELS AND FORENSIC SIMILARITY OF DIGITAL IMAGES
2y 5m to grant · Granted Jan 20, 2026
Patent 12505539
CELL BODY SEGMENTATION USING MACHINE LEARNING
2y 5m to grant · Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+17.5%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 128 resolved cases by this examiner. Grant probability derived from career allow rate.
