Prosecution Insights
Last updated: April 19, 2026
Application No. 18/250,955

DEEP MAGNETIC RESONANCE FINGERPRINTING AUTO-SEGMENTATION

Status: Final Rejection (§101, §103, §DP)
Filed: Apr 27, 2023
Examiner: ROBERTS, RACHEL L
Art Unit: 2674
Tech Center: 2600 — Communications
Assignee: Memorial Sloan Kettering Cancer Center
OA Round: 2 (Final)

Grant Probability: 90% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (above average; 17 granted / 19 resolved; +27.5% vs TC avg)
Interview Lift: +14.3% (moderate lift, across resolved cases with an interview)
Avg Prosecution: 2y 10m (typical timeline)
Career History: 54 total applications across all art units; 35 currently pending
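The headline figures above reduce to simple ratios. As a sanity check only (the per-case data and the with-interview counts are not shown on the card), the allow-rate arithmetic can be reproduced from the counts displayed:

```python
# Reproduce the dashboard's headline allow-rate figures from the raw counts
# shown above (17 granted out of 19 resolved cases).
granted, resolved = 17, 19

career_allow_rate = granted / resolved          # 17/19 ~= 0.895, displayed as 90%
print(f"Career allow rate: {career_allow_rate:.1%}")

# "+27.5% vs TC avg" implies a Tech Center average allow rate around 62%.
implied_tc_avg = career_allow_rate - 0.275
print(f"Implied TC average: {implied_tc_avg:.1%}")
```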

Statute-Specific Performance

§101: 12.1% (-27.9% vs TC avg)
§103: 65.1% (+25.1% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 12.1% (-27.9% vs TC avg)

Tech Center averages are estimates; based on career data from 19 resolved cases.

Office Action

§101 §103 §DP
DETAILED ACTION

The response filed on 01/27/2026 for the current application has been received. The United States Patent & Trademark Office has reviewed the documents submitted and makes the following comments below.

Amendment

Applicant submitted amendments on 01/27/2026. The Examiner acknowledges the amendment and has reviewed the claims accordingly.

Election/Restrictions

Claims 8-13 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on 08/12/2025. Pursuant to the procedures set forth in MPEP § 821.04(b), claims 1-7 and 14-20, previously withdrawn from consideration as a result of a restriction requirement, are hereby rejoined and fully examined for patentability under 37 CFR 1.104. Because a claimed invention previously withdrawn from consideration under 37 CFR 1.142 has been rejoined, the restriction requirement between Group I (claims 1-7 and 14-20) and Group II (claims 8-13) as set forth in the Office action mailed on 06/12/2025 is hereby withdrawn. In view of the withdrawal of the restriction requirement as to the rejoined inventions, applicant is advised that if any claim presented in a divisional application is anticipated by, or includes all the limitations of, a claim that is allowable in the present application, such claim may be subject to provisional statutory and/or nonstatutory double patenting rejections over the claims of the instant application. Once the restriction requirement is withdrawn, the provisions of 35 U.S.C. 121 are no longer applicable. See In re Ziegler, 443 F.2d 1211, 1215, 170 USPQ 129, 131-32 (CCPA 1971); see also MPEP § 804.01. 
Information Disclosure Statement

The IDSs dated 04/27/2023 and 08/27/2024 have been previously considered and remain of record in the application file.

Priority

Applicant claims the benefit of PCT/US2021/056483, which claims the benefit of provisional application 63/106,641 filed 10/28/2020. Claims 1-7 and 14-20 have been afforded the benefit of this filing date.

Overview

Claims 1-20 are pending in this application and have been considered below. Claims 8 and 11-13 are withdrawn. Claims 9 and 10 are cancelled. Claims 1-8 and 14-20 are pending.

Applicant Arguments

Regarding Argument 1, Applicant states that claims “1-7 and 14-20 were elected ‘with traverse, on the basis that the unelected claims can be examined without posing a serious burden,’” and therefore the election/restriction should be properly addressed (see Remarks, page 10, paragraph 4). Regarding Argument 2, Applicant states “Applicant submits herewith Replacement Sheets 2 and 3 (FIGs. 2A and 2B),” and therefore the drawing objection should be withdrawn (see Remarks, page 11, paragraph 1). Regarding Argument 3, Applicant states “Without conceding the propriety of the interpretation and solely in the interest of advancing prosecutions, claims 2-5, 15, 17, 18, and 20 are amended thereby rendering the interpretation moot,” and therefore the claim interpretation should be withdrawn (see Remarks, page 11, paragraph 2). Regarding Argument 4, Applicant states “The combination of Zheng and Vellagoundar does not teach or suggest at least the above elements of the claims,” and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 12, paragraph 1). Regarding Argument 5, Applicant states “the cited references fail to combine to teach or suggest "training, by the computing system, an image segmentation model using the training dataset, the image segmentation model comprising ... 
a generator to: generate a set of features identifying latent features in the plurality of sample tomographic biomedical images, and determine, from the set of features, a plurality of acquisition parameters defining an acquisition of the plurality of sample tomographic biomedical images from the section of the subject" as recited in the claims,” and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 12, paragraph 2). Regarding Argument 6, Applicant states “Zheng and Vellagoundar, alone or in combination, fail to teach or suggest each and every element of claims 1 and 14, independent claims 1 and 14 as well as dependent claims 1-7 and 14-20 are patentable and in condition for allowance. Accordingly, withdrawal of the rejection under 35 U.S.C. § 103 of claims 1-7 and 14-20 is respectfully requested,” and therefore the rejection under 35 U.S.C. 103 should be withdrawn (see Remarks, page 14, paragraph 4).

Examiner’s Responses

In response to Argument 1, Applicant’s arguments, see Remarks filed 01/27/2026, with respect to the restriction have been fully considered and are persuasive. Therefore, the election/restriction has been corrected above. In response to Argument 2, Applicant’s arguments with respect to the objections to drawings 2A and 2B have been fully considered and are persuasive. Therefore, the objection has been withdrawn in view of the replacement sheets. In response to Argument 3, Applicant’s arguments with respect to claim interpretation have been fully considered and are persuasive. Therefore, the claim interpretation has been withdrawn in view of the amendment. In response to Argument 4, Applicant’s arguments with respect to claim 1 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. 
However, upon further consideration, a new ground of rejection is made for claim 1 under 35 U.S.C. 103 over Zheng et al. (US Patent Publication US 2019/0066281 A1, hereafter "Zheng") in view of Vellagoundar et al. (US Patent Publication US 2020/0104674 A1, hereafter "Vellagoundar") in further view of Odry et al. (US Patent Publication US 2020/0020098 A1, hereafter "Odry"). The Examiner finds that Zheng addresses the amended claim language "in the plurality of sample tomographic biomedical images," "input tomographic biomedical images," and "biomedical images is synthetic" in claim 1, with the amendment changing the scope of "a set of features identifying latent features." Specifically, Zheng teaches a plurality of sample tomographic biomedical images in ¶0032-¶0033, ¶0046, and Fig. 3, 312. Zheng also teaches the tomographic images being input into the model in Fig. 2, 204, ¶0006, ¶0010, and ¶0041, and teaches the synthetic image and the determination between synthetic and non-synthetic in ¶0005, ¶0009, and ¶0047. Applicant argues that "the combination of Zheng and Vellagoundar does not teach or suggest at least the above elements of the claims." Zheng teaches a plurality of sample tomographic images being input into a model and the model determining whether an image is synthetic. Zheng does not, however, disclose the specific step of generating a set of latent features, i.e., does not teach "generate a set of features identifying latent features" as recited in claim 1. However, the Examiner interprets that Zheng teaches the main concept of training models to segment tomographic biomedical images to identify regions of interest; the additional details of the function and characteristics of that main concept, as added by Applicant in the amendments, are taught by Odry in the details of the rejection below. The Examiner will maintain prior art Zheng, and details of the rejection are below. 
In response to Argument 5, Applicant’s arguments, see Remarks filed 01/27/2026, with respect to claim 1 have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. However, upon further consideration, a new ground of rejection is made for claim 1 under 35 U.S.C. 103 over Zheng in view of Vellagoundar in further view of Odry. The Examiner finds that Vellagoundar addresses the amended claim language "a generator" in claim 1, with the amendment changing the scope of "a set of features identifying latent features." Specifically, Vellagoundar teaches a generator generating corresponding acquisition parameters in ¶0023-¶0025 and Fig. 10, 1010. Applicant argues that "the cited references fail to combine to teach or suggest 'training, by the computing system, an image segmentation model using the training dataset, the image segmentation model comprising ... a generator to: generate a set of features identifying latent features in the plurality of sample tomographic biomedical images, and determine, from the set of features, a plurality of acquisition parameters defining an acquisition of the plurality of sample tomographic biomedical images from the section of the subject.'" Vellagoundar teaches generating acquisition and scan parameters to obtain images. Vellagoundar does not, however, disclose the specific step of generating a set of latent features, i.e., does not teach "generate a set of features identifying latent features" as recited in claim 1. 
However, the Examiner interprets that Zheng teaches the main concept of training models to segment tomographic biomedical images to identify regions of interest; the additional details of the function and characteristics of that main concept, as added by Applicant in the amendments, are taught by Vellagoundar and Odry in the details of the rejection below. The Examiner will maintain prior art Vellagoundar to teach generating acquisition parameters, and details of the rejection are below. In response to Argument 6, Applicant’s arguments, see Remarks filed 01/27/2026, with respect to claims 1 and 14 and their dependent claims have been considered but are moot in view of the new ground(s) of rejection necessitated by the amendments. However, upon further consideration, a new ground of rejection is made for claims 1-7 and 14-20 under 35 U.S.C. 103 over Zheng et al. (US Patent Publication US 2019/0066281 A1, hereafter "Zheng") in view of Vellagoundar et al. (US Patent Publication US 2020/0104674 A1, hereafter "Vellagoundar") in further view of Odry et al. (US Patent Publication US 2020/0020098 A1, hereafter "Odry").

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-4, 7, 14-17, and 20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. When reviewing independent claims 1 and 14, and based upon consideration of all of the relevant factors with respect to each claim as a whole, claims 1-4, 7, 14-17, and 20 are held to claim an abstract idea without reciting elements that amount to significantly more than the abstract idea and are therefore rejected as ineligible subject matter under 35 U.S.C. 101. 
The Examiner will analyze claim 1; similar rationale applies to independent claim 14. The rationale for this finding, under MPEP § 2106, is explained below. The claimed invention (1) must be directed to one of the four statutory categories, and (2) must not be wholly directed to subject matter encompassing a judicially recognized exception, as defined below. The following two-step analysis is used to evaluate these criteria.

Step 1: Is the claim directed to one of the four patent-eligible subject matter categories: process, machine, manufacture, or composition of matter? When examining the claim under 35 U.S.C. 101, the Examiner interprets that the claim is related to a process, since the claim is directed to a method.

Step 2A, Prong 1: Does the claim wholly embrace a judicially recognized exception, which includes laws of nature, physical phenomena, and abstract ideas, or is it a particular practical application of a judicial exception? The Examiner interprets that the judicial exception applies, since the claim 1 limitation of a method of training models to segment tomographic biomedical images is directed to an abstract idea. The claim is related to mathematical relationships, including generating latent features, determining acquisition parameters from features, and adversarial classification. Under Recentive Analytics, training a neural network on domain-specific data is an abstract idea. Recentive Analytics, Inc. v. Fox Corp., No. 23-2437, 134 F.4th 1205 (Fed. Cir. 2025). If the claim recites a judicial exception (i.e., an abstract idea enumerated in MPEP § 2106.04(a), a law of nature, or a natural phenomenon), the claim requires further analysis in Prong Two.

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? 
The Examiner interprets that the claim 1 limitations do not provide additional elements, or a combination of additional elements, amounting to a practical application, since the claims add no more than insignificant extra-solution activity to the judicial exception; the claim generally amounts to training a neural network on domain-specific data. Specifically, the analysis method does not integrate the judicial exception into a practical application. See Genetic Techs. v. Merial LLC, 818 F.3d 1369, 1376, 118 USPQ2d 1541, 1546 (Fed. Cir. 2016) (eligibility "cannot be furnished by the unpatentable law of nature (or natural phenomenon or abstract idea) itself."). For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B. The Examiner further interprets that the claim 1 limitations do not integrate the abstract idea into a practical application because the claims do not provide a clear improvement to a technology or to computer functionality. Limiting the training to "tomographic biomedical images" is a non-qualifying field-of-use limitation. Data gathering (identifying a dataset) and storage (storing the model) are routine activities that do not add a practical application. Reciting an "image synthesizer" functionally does not reflect a specific technical improvement to MRI hardware or a physical transformation. The claim generally links machine learning to the processing of the images; the mere statement of using machine learning does not integrate an improvement to a technology or to computer functionality. 
For a claim reciting a judicial exception to be eligible, the additional elements (if any) in the claim must "transform the nature of the claim" into a patent-eligible application of the judicial exception, Alice Corp., 573 U.S. at 217, 110 USPQ2d at 1981, either at Prong Two or in Step 2B. If there are no additional elements in the claim, then it cannot be eligible. In such a case, after making the appropriate rejection, it is a best practice for the examiner to recommend an amendment, if possible, that would resolve the eligibility of the claim.

Step 2B: If the claim does not integrate the judicial exception into a practical application, the Examiner must determine whether the claim recites additional elements that amount to significantly more than the judicial exception. The Examiner interprets that the claims do not amount to significantly more, since the claims recite the use of machine learning at a high level of generality and lack an inventive concept: the elements, considered individually and as an ordered combination, are well-understood, routine, and conventional (WURC) in the field. Further, the specification acknowledges that GAN architectures, VAEs, and Bloch equation simulations are known in the art. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry) does not supply an inventive concept, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984. 
Claims 2-7 and 14-20, depending from the independent claims, include all the limitations of the independent claims. The Examiner finds that claims 2 and 15 add mathematical loss computation and parameter updates, which are further abstract mathematical concepts. This is seen as an abstract idea related to a mathematical process; the claims describe the further manipulation of data. See MPEP 2106.05(g). The Examiner finds that claims 3 and 16 add a "matching loss" (Wasserstein distance), a mathematical operation for distribution matching, which is a further abstract mathematical concept. This is seen as an abstract idea related to a mathematical process; the claims describe the further manipulation of data. See MPEP 2106.05(g). The Examiner finds that claims 4 and 17 involve a mathematical process for data augmentation. This is seen as an abstract idea related to a mathematical concept; the claims describe a mathematical update to generator/discriminator parameters using a segmentor loss, meaning they fail to integrate the abstract idea into a technical practical application. See MPEP 2106.05(g); see MPEP 2106.05(h). The Examiner finds that claims 5 and 18 recite specific physical MRI parameters (FA, TR, TE) to configure the physical device; therefore, the abstract idea is integrated into a technical practical application. The Examiner finds that claims 6 and 19 recite a non-conventional architecture, specifically an MRRN with RCUs; therefore, the abstract idea is integrated into a technical improvement in the data processing. The Examiner finds that claims 7 and 20 involve data-type limitations. This is seen as an abstract idea related to a mathematical concept; the claims add "in vivo" acquisition and specific tissue parameters (PD, T1, T2), which are field-of-use, data-type limitations and do not impose meaningful limits on the judicial exception. See MPEP 2106.05(g); see MPEP 2106.05(h). 
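Claims 5/18 and 7/20 tie the method to physical acquisition parameters (FA, TR, TE) and tissue parameters (PD, T1, T2). For readers outside MR physics, the standard steady-state spoiled gradient-echo signal equation (general MR background, not taken from the application or the record) illustrates how these two parameter sets interact:

```python
import math

def spgr_signal(pd, t1, t2_star, fa_deg, tr, te):
    """Steady-state spoiled gradient-echo (SPGR) signal model, a standard
    MR relationship between acquisition parameters (flip angle FA,
    repetition time TR, echo time TE) and tissue parameters (proton
    density PD, T1, T2*). All times in the same unit (e.g., ms)."""
    fa = math.radians(fa_deg)
    e1 = math.exp(-tr / t1)
    return pd * math.sin(fa) * (1 - e1) / (1 - math.cos(fa) * e1) * math.exp(-te / t2_star)

# Illustrative white-matter-like values at 1.5 T (PD normalized to 1.0).
s = spgr_signal(pd=1.0, t1=600.0, t2_star=70.0, fa_deg=30.0, tr=30.0, te=5.0)
```

Changing FA, TR, or TE changes the weighting each tissue parameter contributes to the signal, which is why the claims treat the two parameter sets together.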
Thus, claims 2-4 and 7, and 14-17 and 20, recite the same abstract idea and therefore are not drawn to eligible subject matter, as they are directed to the abstract idea without significantly more. Independent claim 14 is analogous to independent claim 1; its analogous limitations can be analyzed in the same way as above for claim 1 and it is hence rejected under § 101. Moreover, claim 14 recites a different statutory category, a "system," which the Examiner interprets as related to a machine, since the claim is directed to a system for training models to segment tomographic biomedical images; it is consistent with the same abstract ideas, including mathematical relationships such as generating latent features, determining acquisition parameters from features, and adversarial classification. Under Recentive Analytics, training a neural network on domain-specific data is an abstract idea. Recentive Analytics, Inc. v. Fox Corp., No. 23-2437, 134 F.4th 1205 (Fed. Cir. 2025). The "system" limitation merely further implements the abstract ideas on generic computer or software/hardware components. Therefore, the Examiner interprets that the claims are rejected under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7 and 14-20 are rejected under 35 U.S.C. 103 as obvious over Zheng et al. (US Patent Publication US 2019/0066281 A1, hereafter "Zheng") in view of Vellagoundar et al. (US Patent Publication US 2020/0104674 A1, hereafter "Vellagoundar") in further view of Odry et al. (US Patent Publication US 2020/0020098 A1, hereafter "Odry"). 
Regarding Claim 1, Zheng teaches a method of training models (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) to segment (Zheng ¶0035, ¶0036 and Fig. 2 disclose segmenting images) tomographic biomedical images (Zheng ¶0032-¶0033 discloses the images being from any medical domain including tomography) comprising: identifying, by a computing system (Zheng ¶0081 discloses the system, method, and apparatus being performed by a computing system), a training dataset (Zheng ¶0069, ¶0071, discloses identifying and choosing a training dataset) having: a plurality of sample tomographic biomedical images (Zheng ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig. 3, 312, disclose a set of training images) acquired from a section of a subject (Zheng ¶0062 discloses the section being a cardiovascular slice), a plurality of tissue parameters (Zheng ¶0068 discloses cardiac imaging including imaging parameters) in at least one of the plurality of sample tomographic biomedical images (Zheng ¶0032-¶0033; ¶0046, Fig. 3, 312); training, by the computing system (Zheng ¶0081), an image segmentation model using the training dataset (Zheng ¶0072, Table 702 discloses training the segmentor using a training dataset), the image segmentation model (Zheng ¶0004, ¶0005; ¶0035, ¶0036 and Fig. 2) comprising: in the plurality of sample tomographic biomedical images (Zheng ¶0032-¶0033; ¶0046, Fig. 3, 312) from the section of the subject (Zheng ¶0062); an image synthesizer (Zheng ¶0037 discloses a generator that generates a synthesized medical image) to generate a plurality of synthesized tomographic biomedical images (Zheng ¶0005, ¶0009 discloses the images synthesized being used to form a training set, therefore there are multiple synthesized images); a discriminator (Zheng ¶0047 discloses a discriminator) to determine a classification result (Zheng ¶0047 discloses a discriminator that classifies images) indicating whether an input tomographic biomedical image (Zheng ¶0041 discloses an input biomedical image) corresponding to one of the plurality of sample tomographic biomedical images (Zheng ¶0032-¶0033; ¶0046, Fig. 3, 312) or the plurality of synthesized tomographic biomedical images (Zheng ¶0005, ¶0009) is synthetic (Zheng ¶0047 discloses the discriminator distinguishes between the real training image and the synthesized image); and a segmentor to generate, using the input tomographic biomedical image (Zheng ¶0010 discloses a segmentor that segments the input medical image of the patient), a segmented biomedical image (Zheng ¶0010 discloses segmenting an input image); storing, by the computing system, the image segmentation model (Zheng ¶0043, ¶0079 discloses storing the results and method on a computer system) in tomographic biomedical images (Zheng ¶0032-¶0033 discloses the images being from any medical domain including tomography). 
Zheng does not explicitly teach a plurality of tissue parameters associated with the section of the subject, and an annotation identifying at least one region on the section; a generator; defining an acquisition in accordance with the plurality of tissue parameters and the plurality of acquisition parameters; identifying the at least one region in the section of the subject; or for use to identify one or more regions of interest. Vellagoundar is in the same field of medical imaging. Further, Vellagoundar teaches a plurality of tissue parameters (Vellagoundar ¶0054 discloses the functional relationship: to get a T1-weighted image one must manipulate the T1 relaxation properties, which are known/standard targets) associated with the section of the subject (Vellagoundar ¶0019, ¶0064, disclose a region of interest), and an annotation identifying at least one region (Vellagoundar ¶0019 discloses a region of interest and annotating anatomical characteristics) on the section (Vellagoundar ¶0019, ¶0064); a generator (Vellagoundar Fig. 10, 1010, ¶0022 discloses determining scan parameters using a neural network for taking a medical image; ¶0023-¶0025 disclose generating corresponding acquisition parameters; ¶0040 discloses a plurality of scan parameters including scan repeat time, scan resolution, inter-echo spacing, bandwidth, time-to-echo, etc.); the plurality of acquisition parameters defining an acquisition (Vellagoundar Fig. 10, 1010, ¶0022, ¶0023-¶0025, ¶0040) in accordance with the plurality of tissue parameters (Vellagoundar ¶0054 discloses a plurality of tissue parameters, including proton density, T1, and T2) and the plurality of acquisition parameters (Vellagoundar ¶0040); identifying the at least one region on the section of the subject (Vellagoundar ¶0019); for use to identify one or more regions of interest (Vellagoundar ¶0019, ¶0064). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zheng by incorporating the tissue and acquisition parameters for taking the medical images, as well as the feature map used for the machine learning model, as taught by Vellagoundar, to make an invention that can acquire medical images corresponding to certain parameters to create an optimal segmentation and correctly diagnose a medical condition; one of ordinary skill in the art would be motivated to combine the references since an object of the invention is to reduce setup and scan acquisition time while also preserving and improving image quality without depending entirely on user expertise (Vellagoundar, ¶0025-¶0026, ¶0043). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention. Zheng and Vellagoundar in combination do not explicitly disclose "generate a set of features identifying latent features" and "determine, from the set of features." Odry is in the same field of image processing of medical images using neural networks. 
Further, Odry teaches generate a set of features identifying latent features (Odry ¶0048, ¶0077, ¶0111, ¶0120, discloses generating a set of feature maps with latent variables) and determine, from the set of features (Odry ¶0120 discloses a set of matrices which are feature maps; ¶0015 and Fig. 6A disclose latent variables that correspond to a set of training images). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zheng in view of Vellagoundar by incorporating a set of identifying latent features as taught by Odry, to make an invention that can acquire medical images and determine the features most important for correctly diagnosing a medical condition; one of ordinary skill in the art would be motivated to combine the references since there is a need for methods that can be easily integrated into the scanning protocol and may impact throughput, triage, and treatment workflow. Thus, the method can be advantageous to healthcare (HC) and diagnostic imaging (DI) applications of MR, and can also be used in a regular reading setting to prioritize reading of abnormal cases (Odry, ¶0173). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention. 
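For orientation, the data flow the rejection maps across the three references (generator producing latent features and acquisition parameters, image synthesizer, discriminator, segmentor) can be sketched structurally. Everything below is a hypothetical toy illustration of the claim's component roles; none of the function bodies come from the application or the cited references:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(images):
    """Derive latent features from sample images, then acquisition
    parameters (stand-ins for FA, TR, TE) from those features."""
    latent = images.reshape(len(images), -1).mean(axis=1, keepdims=True)
    acq_params = np.hstack([latent * 30.0, latent * 500.0, latent * 20.0])
    return latent, acq_params

def image_synthesizer(tissue_params, acq_params, shape):
    """Generate synthesized images 'in accordance with' tissue and
    acquisition parameters (here just a scaled random field)."""
    scale = float(tissue_params.mean() * acq_params.mean()) / 500.0
    return scale * rng.random((len(acq_params), *shape))

def discriminator(image, threshold=0.29):
    """Classify an input image as synthetic (True) or real (False)."""
    return bool(image.std() < threshold)

def segmentor(image, level=0.5):
    """Produce a binary segmentation mask identifying a region."""
    return image > level

samples = rng.random((4, 8, 8))            # sample tomographic images
tissue = np.array([1.0, 600.0, 80.0])      # PD / T1 / T2 stand-ins
latent, acq = generator(samples)
synthetic = image_synthesizer(tissue, acq, samples.shape[1:])
mask = segmentor(samples[0])
```

In an actual adversarial setup each stand-in function would be a trainable network and the discriminator's errors would drive the synthesizer's updates; the sketch only fixes the interfaces the claim recites.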
Regarding Claim 2, Zheng in view of Vellagoundar in further view of Odry teaches the method of claim 1, wherein training the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) further comprises: determining a segmentation loss metric (Zheng ¶0067, ¶0060, disclose determining a segmentation loss) based on the segmented tomographic biomedical image (Zheng ¶0062 discloses the input medical image being segmented by the segmentor) and the annotation (Vellagoundar ¶0019 discloses a region of interest and annotating anatomical characteristics); and updating one or more parameters of at least one of the generator (Zheng ¶0059 discloses updating the parameters of the generator to optimize the network), the discriminator (Zheng ¶0047 discloses a discriminator), or the segmentor (Zheng ¶0010 discloses a segmentor that segments the input medical image of the patient) of the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) using the segmentation loss metric (Zheng ¶0059 discloses updating the network using the cyclical and shape losses from the segment loss). See rationale for Claim 1 (its parent claim). 
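The office action does not reproduce the segmentation loss metric itself. As an editor's illustration only, not drawn from Zheng, Vellagoundar, or Odry, a loss comparing a segmented image against its annotation is commonly computed as a soft Dice loss:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between a predicted segmentation mask and an annotation mask.

    pred and target hold per-pixel values in [0, 1]; eps guards empty masks.
    """
    pred = pred.astype(float).ravel()
    target = target.astype(float).ravel()
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice

# A perfect segmentation gives zero loss; a fully wrong one approaches 1.
mask = np.array([[0, 1], [1, 1]])
print(dice_loss(mask, mask))  # prints 0.0
```

Minimizing such a metric with respect to the generator, discriminator, or segmentor parameters is the "updating" step the claim recites; the specific loss used by the references may differ.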
Regarding Claim 3, Zheng in view of Vellagoundar in further view of Odry teaches the method of claim 1, wherein training the image segmentation model (Zheng ¶0072, Table 702 discloses training the segmentor using a training data set) further comprises: determining a matching loss metric (Zheng ¶0078 discloses generating the shape consistency loss based on the best matched shape) based on the plurality of sample tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images) and the corresponding plurality of synthesized tomographic biomedical images (Zheng ¶0078 discloses generating the segmentation score based off of the ground truth from the corresponding real volumes of the images; Zheng ¶0078 discloses training the model with the 3D synthesized images); and updating one or more parameters of at least one of the generator (Zheng ¶0059 discloses updating the parameters of the generator to optimize the network), the discriminator (Zheng ¶0047 discloses a discriminator), or the segmentor (Zheng ¶0010 discloses a segmentor that segments the input medical image of the patient) of the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) using the matching loss metric (Zheng ¶0059 discloses updating the network using the cyclical and shape losses from the segment loss). See rationale for Claim 1 (its parent claim). 
Regarding Claim 4, Zheng in view of Vellagoundar in further view of Odry teaches the method of claim 1, wherein training the image segmentation model (Zheng ¶0072, Table 702 discloses training the segmentor using a training data set) further comprises updating one or more parameters of at least one of the generator (Zheng ¶0059 discloses updating the parameters of the generator to optimize the network) and the discriminator (Zheng ¶0047 discloses a discriminator) using a loss metric associated with the segmentor (Zheng ¶0059 discloses updating the network using the cyclical and shape losses from the segment loss; Zheng ¶0078 discloses a segmentation score). See rationale for Claim 1 (its parent claim). Regarding Claim 5, Zheng in view of Vellagoundar in further view of Odry teaches the method of claim 1, wherein storing the image segmentation model (Zheng ¶0043, ¶0079 discloses storing the results and method on a computer system) further comprises providing, responsive to training of the image segmentation model (Vellagoundar ¶0037 discloses the computer system acting in response), the plurality of acquisition parameters for acquisition of the tomographic biomedical images (Vellagoundar Fig 10 1010, ¶0022 discloses determining scan parameters using a neural network for taking a medical image; ¶0023-¶0025 discloses generating corresponding acquisition parameters), the plurality of acquisition parameters identifying at least one of a flip angle (FA), a repetition time (TR), or an echo time (TE) (Vellagoundar ¶0040 discloses a plurality of scan parameters including scan repeat time, scan resolution, inter-echo spacing, bandwidth, time-to-echo, etc.; ¶0055 discloses "flip angle (FA), echo time (TE), repetition time (TR), inversion time (TI), echo train length (ETL)"). See rationale for Claim 1 (its parent claim). 
Regarding Claim 6, Zheng in view of Vellagoundar in further view of Odry teaches the method of claim 1, wherein the segmentor (Zheng ¶0010 discloses a segmentor that segments the input medical image of the patient) of the image segmentation model (Zheng ¶0072, Table 702 discloses training the segmentor using a training data set) comprises a plurality of residual layers corresponding to a plurality of resolutions (Zheng ¶0063, ¶0065 disclose residual layers corresponding to resolutions) to generate the segmented tomographic biomedical image (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images; ¶0032-¶0033 discloses the images being from any medical domain including tomography), each of the plurality of residual layers having one or more residual connection units (RCUs) (Zheng ¶0068 discloses residual networks) to process at least one feature map (Vellagoundar ¶0004-¶0005, ¶0006 disclose a feature map that maps the image datasets to corresponding image quality metrics) for a corresponding resolution of the plurality of resolutions (Zheng ¶0063, ¶0065 disclose residual layers corresponding to resolutions). See rationale for Claim 1 (its parent claim). 
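The residual connection unit recited in claim 6 can be illustrated with a minimal sketch of a skip connection; the elementwise transform and weight below are placeholders, not an implementation from the cited references:

```python
import numpy as np

def rcu(x, weight):
    """One residual connection unit: a learned transform plus an identity skip path.

    The multiply-and-ReLU stands in for the unit's convolution + activation.
    """
    transformed = np.maximum(weight * x, 0.0)
    return x + transformed  # skip connection: output = input + F(input)

feature_map = np.array([1.0, -2.0, 3.0])
out = rcu(feature_map, weight=0.5)  # [1.5, -2.0, 4.5]
```

The skip path passes the input through unchanged, which is what lets one RCU per resolution refine a feature map without losing the coarser-scale signal.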
Regarding Claim 7, Zheng in view of Vellagoundar in further view of Odry teaches the method of claim 1, wherein each of the plurality of sample tomographic biomedical images is acquired (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images) from the section of the subject in vivo (Zheng ¶0068 discloses patients with various cardiovascular diseases) via magnetic resonance imaging (Zheng ¶0062 discloses the input image being an MRI image of a cardiovascular slice), the plurality of tissue parameters identifying at least one of proton density (PD), a longitudinal relaxation time (T1), or a transverse relaxation time (T2) (Vellagoundar ¶0054 discloses a plurality of tissue parameters, including proton density, T1, and T2) for the acquisition of the plurality of sample tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images). See rationale for Claim 1 (its parent claim). 
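The tissue parameters of claim 7 (PD, T1, T2) and the acquisition parameters of claim 5 (FA, TR, TE) relate through standard MR physics; the steady-state spoiled gradient-echo signal equation is one textbook example of that relationship. The tissue values below are merely illustrative and are not taken from the cited references:

```python
import numpy as np

def spgr_signal(pd, t1, t2_star, fa_deg, tr, te):
    """Steady-state spoiled gradient-echo signal from tissue and acquisition parameters.

    S = PD * sin(FA) * (1 - e^(-TR/T1)) / (1 - cos(FA) * e^(-TR/T1)) * e^(-TE/T2*)
    """
    fa = np.deg2rad(fa_deg)
    e1 = np.exp(-tr / t1)
    return pd * np.sin(fa) * (1.0 - e1) / (1.0 - np.cos(fa) * e1) * np.exp(-te / t2_star)

# Illustrative white-matter-like values (ms): T1=800, T2*=50, with FA=15 deg, TR=30, TE=5.
s = spgr_signal(pd=0.7, t1=800.0, t2_star=50.0, fa_deg=15.0, tr=30.0, te=5.0)
```

This is why the claimed acquisition in accordance with both parameter sets matters: the same tissue yields different image contrast as FA, TR, and TE change.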
Regarding Claim 14, Zheng teaches a system (Zheng ¶0016, Fig 1 discloses a system) for training models (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) to segment (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography), comprising: a computing system (Zheng ¶0081, discloses the system, method, and apparatus being performed by a computing system) having one or more processors coupled with memory (Zheng ¶0079, ¶0082, ¶0084 discloses a processor and memory), configured to: identify a training dataset (Zheng ¶0069, ¶0071 disclose identifying and choosing a training dataset) having: a plurality of sample tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images) acquired from a section of a subject (Zheng ¶0062 discloses the section being a cardiovascular slice), in at least one of the plurality of sample tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images); train an image segmentation model using the training dataset (Zheng ¶0072, Table 702 discloses training the segmentor using a training data set), the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) comprising: in the plurality of sample tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images) from the section of the subject (Zheng ¶0062 discloses the section being a cardiovascular slice); an image synthesizer (Zheng ¶0037 discloses a generator that generates a synthesized medical image) to generate a 
plurality of synthesized tomographic biomedical images (Zheng ¶0005, ¶0009 discloses the images synthesized being used to form a training set, therefore there are multiple synthesized images); a discriminator (Zheng ¶0047 discloses a discriminator) to determine a classification result (Zheng ¶0047 discloses a discriminator that classifies images) indicating whether an input tomographic biomedical image (Zheng ¶0041 discloses an input biomedical image) corresponding to one of the plurality of sample tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images) or the plurality of synthetic tomographic biomedical images (Zheng ¶0005, ¶0009 discloses the images synthesized being used to form a training set, therefore there are multiple synthesized images) is synthesized (Zheng ¶0047 discloses the discriminator distinguishes between the real training image or the synthesized image); and a segmentor to generate, using the input tomographic biomedical image (Zheng ¶0010 discloses a segmentor that segments the input medical image of the patient), a segmented biomedical image (Zheng ¶0010 discloses segmenting an input image); and store, by the computing system, the image segmentation model (Zheng ¶0043, ¶0079 discloses storing the results and method on a computer system), in tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography). Zheng does not explicitly teach a plurality of tissue parameters associated with the section of the subject, and an annotation identifying at least one region on the section; a generator, the plurality of acquisition parameters defining an acquisition in accordance with the plurality of tissue parameters and the plurality of acquisition parameters; identifying the at least one region in the section of the subject; for use to identify one or more regions of interest. 
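The discriminator's real-versus-synthesized classification described above is conventionally trained with a binary cross-entropy objective. The following sketch is an editor's illustration only and is not taken from Zheng:

```python
import numpy as np

def discriminator_loss(real_scores, fake_scores, eps=1e-12):
    """Binary cross-entropy objective for a real-vs-synthesized discriminator.

    real_scores / fake_scores are the discriminator's probabilities that each
    image is a real sample (ideally near 1 for real, near 0 for synthesized).
    """
    real_scores = np.clip(real_scores, eps, 1.0 - eps)
    fake_scores = np.clip(fake_scores, eps, 1.0 - eps)
    return -np.mean(np.log(real_scores)) - np.mean(np.log(1.0 - fake_scores))

confident = discriminator_loss(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
chance = discriminator_loss(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
# A discriminator that separates real from synthesized scores lower than chance.
```

In the adversarial setup, the image synthesizer is trained against this objective, which pushes synthesized images toward the distribution of the real samples.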
Vellagoundar is in the same field of medical imaging. Further, Vellagoundar teaches a plurality of tissue parameters (Vellagoundar ¶0054 discloses a plurality of tissue parameters, including proton density, T1, and T2) associated with the section of the subject (Vellagoundar ¶0019, ¶0064 disclose a region of interest), and an annotation identifying at least one region (Vellagoundar ¶0019 discloses a region of interest and annotating anatomical characteristics) on the section (Vellagoundar ¶0019, ¶0064 disclose a region of interest); a generator (Vellagoundar Fig 10 1010, ¶0022 discloses determining scan parameters using a neural network for taking a medical image; ¶0023-¶0025 discloses generating corresponding acquisition parameters; ¶0040 discloses a plurality of scan parameters including scan repeat time, scan resolution, inter-echo spacing, bandwidth, time-to-echo, etc.), the plurality of acquisition parameters defining an acquisition (Vellagoundar Fig 10 1010, ¶0022 discloses determining scan parameters using a neural network for taking a medical image; ¶0023-¶0025 discloses generating corresponding acquisition parameters; ¶0040 discloses a plurality of scan parameters including scan repeat time, scan resolution, inter-echo spacing, bandwidth, time-to-echo, etc.) in accordance with the plurality of tissue parameters (Vellagoundar ¶0054 discloses a plurality of tissue parameters, including proton density, T1, and T2) and the plurality of acquisition parameters (Vellagoundar ¶0040 discloses a plurality of scan parameters including scan repeat time, scan resolution, inter-echo spacing, bandwidth, time-to-echo, etc.); identifying the at least one region in the section of the subject (Vellagoundar ¶0019 discloses a region of interest and annotating anatomical characteristics); for use to identify one or more regions of interest (Vellagoundar ¶0019, ¶0064 disclose a region of interest). 
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zheng by incorporating the tissue and acquisition parameters for taking the medical images, as well as the feature map used for the machine learning model, as taught by Vellagoundar, to make an invention that can acquire medical images corresponding to certain parameters to create an optimal segmentation method able to correctly diagnose a medical condition; thus, one of ordinary skill in the art would be motivated to combine the references since an object of the present invention is to reduce setup and scan acquisition time while also preserving and improving image quality without depending entirely on user expertise (Vellagoundar, ¶0025-¶0026, ¶0043). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention. Zheng and Vellagoundar in combination do not explicitly disclose generate a set of features identifying latent features and determine, from the set of features. Odry is in the same field of image processing of medical images using neural networks. Further, Odry teaches generate a set of features identifying latent features (Odry ¶0048, ¶0077, ¶0111, ¶0120 disclose generating a set of feature maps with latent variables) and determine, from the set of features (Odry ¶0120 discloses a set of matrices which are feature maps; ¶0015 and Fig 6A disclose latent variables that correspond to a set of training images). 
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Zheng in view of Vellagoundar by incorporating a set of features identifying latent features as taught by Odry, to make an invention that can acquire medical images and determine the features that are most important to correctly diagnose a medical condition; thus, one of ordinary skill in the art would be motivated to combine the references since there is a need for methods that can be easily integrated into the scanning protocol and may impact throughput, triage, and treatment workflow. Thus, the method can be advantageous to healthcare (HC) and diagnostic imaging (DI) applications of MR. The method can also be used in a regular reading setting to prioritize reading of abnormal cases (Odry, ¶0173). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention. 
Regarding Claim 15, Zheng in view of Vellagoundar in further view of Odry teaches the system of claim 14, wherein the computing system (Zheng ¶0081, discloses the system, method, and apparatus being performed by a computing system) is further configured to train the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) by: determining a segmentation loss metric (Zheng ¶0067, ¶0060, disclose determining a segmentation loss) based on the segmented tomographic biomedical image (Zheng ¶0062 discloses the input medical image being segmented by the segmentor) and the annotation (Vellagoundar ¶0019 discloses a region of interest and annotating anatomical characteristics); and updating one or more parameters of at least one of the generator (Zheng ¶0059 discloses updating the parameters of the generator to optimize the network), the discriminator (Zheng ¶0047 discloses a discriminator), or the segmentor (Zheng ¶0010 discloses a segmentor that segments the input medical image of the patient) of the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) using the segmentation loss metric (Zheng ¶0059 discloses updating the network using the cyclical and shape losses from the segment loss). See rationale for Claim 14 (its parent claim). 
Regarding Claim 16, Zheng in view of Vellagoundar in further view of Odry teaches the system of claim 14, wherein the computing system (Zheng ¶0081, discloses the system, method, and apparatus being performed by a computing system) is further configured to train the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) by: determining a matching loss metric (Zheng ¶0078 discloses generating the shape consistency loss based on the best matched shape) based on the plurality of sample tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images) and the corresponding plurality of synthesized tomographic biomedical images (Zheng ¶0078 discloses generating the segmentation score based off of the ground truth from the corresponding real volumes of the images; Zheng ¶0078 discloses training the model with the 3D synthesized images); and updating one or more parameters of at least one of the generator (Zheng ¶0059 discloses updating the parameters of the generator to optimize the network), the discriminator (Zheng ¶0047 discloses a discriminator), or the segmentor (Zheng ¶0010 discloses a segmentor that segments the input medical image of the patient) of the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) using the matching loss metric (Zheng ¶0059 discloses updating the network using the cyclical and shape losses from the segment loss). See rationale for Claim 14 (its parent claim). 
Regarding Claim 17, Zheng in view of Vellagoundar in further view of Odry teaches the system of claim 14, wherein the computing system (Zheng ¶0081, discloses the system, method, and apparatus being performed by a computing system) is further configured to train the image segmentation (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images) model (Zheng ¶0004, ¶0005, discloses a method to train a machine learning model) by updating one or more parameters of at least one of the generator (Zheng ¶0059 discloses updating the parameters of the generator to optimize the network) or the discriminator (Zheng ¶0047 discloses a discriminator) using a loss metric associated with the segmentor (Zheng ¶0059 discloses updating the network using the cyclical and shape losses from the segment loss; Zheng ¶0078 discloses a segmentation score). See rationale for Claim 14 (its parent claim). Regarding Claim 18, Zheng in view of Vellagoundar in further view of Odry teaches the system of claim 14, wherein the computing system (Zheng ¶0081, discloses the system, method, and apparatus being performed by a computing system) is further configured to provide, responsive to training of the image segmentation model (Vellagoundar ¶0037 discloses the computer system acting in response), the plurality of acquisition parameters for acquisition of the tomographic biomedical images (Vellagoundar Fig 10 1010, ¶0022 discloses determining scan parameters using a neural network for taking a medical image; ¶0023-¶0025 discloses generating corresponding acquisition parameters), the plurality of acquisition parameters identifying at least one of a flip angle (FA), a repetition time (TR), or an echo time (TE) (Vellagoundar ¶0040 discloses a plurality of scan parameters including scan repeat time, scan resolution, inter-echo spacing, bandwidth, time-to-echo, etc.; ¶0055 discloses "flip angle (FA), echo time (TE), repetition time (TR), inversion time (TI), echo train length (ETL)"). 
See rationale for Claim 14 (its parent claim). Regarding Claim 19, Zheng in view of Vellagoundar in further view of Odry teaches the system of claim 14, wherein the segmentor (Zheng ¶0010 discloses a segmentor that segments the input medical image of the patient) of the image segmentation model (Zheng ¶0072, Table 702 discloses training the segmentor using a training data set) comprises a plurality of residual layers corresponding to a plurality of resolutions (Zheng ¶0063, ¶0065 disclose residual layers corresponding to resolutions) to generate the segmented tomographic biomedical image (Zheng ¶0035, ¶0036 and Fig 2 discloses segmenting images; ¶0032-¶0033 discloses the images being from any medical domain including tomography), each of the plurality of residual layers having one or more residual connection units (RCUs) (Zheng ¶0068 discloses residual networks) to process at least one feature map (Vellagoundar ¶0004-¶0005, ¶0006 disclose a feature map that maps the image datasets to corresponding image quality metrics) for a corresponding resolution of the plurality of resolutions (Zheng ¶0063, ¶0065 disclose residual layers corresponding to resolutions). See rationale for Claim 14 (its parent claim). 
Regarding Claim 20, Zheng in view of Vellagoundar in further view of Odry teaches the system of claim 14, wherein each of the plurality of sample tomographic biomedical images is acquired (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images) from the section of the subject in vivo (Zheng ¶0068 discloses patients with various cardiovascular diseases) via magnetic resonance imaging (Zheng ¶0062 discloses the input image being an MRI image of a cardiovascular slice), the plurality of tissue parameters identifying at least one of proton density (PD), a longitudinal relaxation time (T1), or a transverse relaxation time (T2) (Vellagoundar ¶0054 discloses a plurality of tissue parameters, including proton density, T1, and T2) for the acquisition of the plurality of sample tomographic biomedical images (Zheng, ¶0032-¶0033 discloses the images being from any medical domain including tomography; ¶0046, Fig 3 312, disclose a set of training images). See rationale for Claim 14 (its parent claim). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL LYNN ROBERTS whose telephone number is (571) 272-6413. The examiner can normally be reached Monday through Friday, 7:30 am to 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Oneal Mistry, can be reached at (313) 446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RACHEL L ROBERTS/Examiner, Art Unit 2674 /Ross Varndell/Primary Examiner, Art Unit 2674

Prosecution Timeline

Apr 27, 2023
Application Filed
Aug 27, 2025
Non-Final Rejection — §101, §103, §DP
Jan 05, 2026
Interview Requested
Jan 13, 2026
Examiner Interview Summary
Jan 27, 2026
Response Filed
Mar 12, 2026
Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581132
LARGE-SCALE POINT CLOUD-ORIENTED TWO-DIMENSIONAL REGULARIZED PLANAR PROJECTION AND ENCODING AND DECODING METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12569208
PET APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 10, 2026
Patent 12564324
IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING SYSTEM FOR ABNORMALITY DETECTION
2y 5m to grant Granted Mar 03, 2026
Patent 12561773
METHOD AND APPARATUS FOR PROCESSING IMAGE, ELECTRONIC DEVICE, CHIP AND MEDIUM
2y 5m to grant Granted Feb 24, 2026
Patent 12525028
CONTACT OBJECT DETECTION APPARATUS AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant Granted Jan 13, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+14.3%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
