Prosecution Insights
Last updated: April 19, 2026
Application No. 18/282,695

BRAIN IMAGE SEGMENTATION USING TRAINED CONVOLUTIONAL NEURAL NETWORKS

Non-Final OA (§102, §103)

Filed: Sep 18, 2023
Examiner: ISLAM, PROMOTTO TAJRIAN
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Owl Navigation Inc.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 78% (above average; 28 granted / 36 resolved; +15.8% vs TC avg)
Interview Lift: +17.5% (strong; based on resolved cases with vs without interview)
Avg Prosecution: 2y 11m (typical timeline; 24 applications currently pending)
Total Applications: 60 (career history, across all art units)
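The headline figures above are simple derived statistics. A minimal Python sketch of one plausible computation follows; the one-decimal rounding and the additive, capped interview adjustment are assumptions, not the tool's documented method:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage, rounded to one decimal place."""
    return round(100 * granted / resolved, 1)

def with_interview(base_pct: float, lift_pct: float) -> float:
    """Interview-adjusted grant probability, capped at 100% (assumed additive model)."""
    return min(base_pct + lift_pct, 100.0)

base = allow_rate(28, 36)             # 28 granted of 36 resolved -> 77.8
boosted = with_interview(base, 17.5)  # apply the +17.5% interview lift
print(f"{base:.0f}% base, {boosted:.0f}% with interview")  # 78% base, 95% with interview
```

This reproduces the displayed 78% and 95% figures from the underlying counts, but the tool may weight recent cases or use a non-linear interview adjustment.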

Statute-Specific Performance

§101: 17.4% (-22.6% vs TC avg)
§102: 14.6% (-25.4% vs TC avg)
§103: 45.2% (+5.2% vs TC avg)
§112: 17.7% (-22.3% vs TC avg)

Tech Center averages are estimates • Based on career data from 36 resolved cases
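Each per-statute delta above reconciles to the same implied Tech Center average of 40.0% (e.g. 17.4% + 22.6% = 45.2% - 5.2% = 40.0%). A short sketch of that reconciliation; treating 40.0% as a flat average across all four statutes is an assumption read off the numbers, not a documented constant:

```python
# Implied Tech Center average: every rate/delta pair above sums back to 40.0.
TC_AVG = 40.0

# Examiner's per-statute allowance-after-rejection rates from the dashboard.
statute_rates = {"101": 17.4, "102": 14.6, "103": 45.2, "112": 17.7}

# Delta vs the Tech Center average, rounded to one decimal place.
deltas = {s: round(rate - TC_AVG, 1) for s, rate in statute_rates.items()}
print(deltas)  # {'101': -22.6, '102': -25.4, '103': 5.2, '112': -22.3}
```

The positive §103 delta is the one statute where this examiner runs above the Tech Center estimate.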

Office Action

Rejection bases: §102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election without traverse of Group I (claims 1-17) in the reply filed on 12/10/2025 is acknowledged.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-6, 8, 12, and 16-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Deng et al. ("Learning-based 3T brain MRI segmentation with guidance from 7T MRI labeling", DOI: https://doi.org/10.1118/1.4967487, Publication Year: 2016; hereinafter "Deng").

Regarding Claim 1, Deng discloses a method for generating image file pairs usable to train deep learning models, comprising (see Fig. 1 and 2.A. Data acquisition and image processing):

receiving access to a first image file of a three-dimensional body portion of a subject including structures of interest (SOI), wherein the first image file is defined by a first coordinate space and a first effective resolution such as a contrast resolution (2.A. Data acquisition and image processing, Deng discloses obtaining 3T MR images. The Examiner notes that the 3T MR images are three-dimensional.);

receiving access to a second image file of the three-dimensional body portion of the subject including the SOI, wherein the second image file is defined by a second coordinate space and a second effective resolution such as a contrast resolution that is greater than the first resolution (2.A. Data acquisition and image processing, Deng discloses obtaining 7T MR images. The Examiner notes that the 7T MR images are three-dimensional.);

segmenting the SOI in the second image file (2.A. Data acquisition and image processing, Deng discloses segmenting the 7T MR images to obtain tissue segmentation and a brain mask.);

transforming the segmented SOI into the first coordinate space to create a third image file of the three-dimensional body portion of the subject including the SOI (2.A. Data acquisition and image processing, Deng discloses propagating (i.e., transforming) the brain mask and tissue segmentation from the 7T MR image to the corresponding 3T MR image.); and

wherein the first image file and the third image file are usable to train deep learning models for segmentation of SOI in patient images corresponding to the SOI in the first image file and the third image file (4. Discussions and Conclusion, Deng discloses that deep learning algorithms can benefit (i.e., "are usable") from the segmentation information provided by the 7T and 3T MRI images.).

Regarding Claim 2, Deng discloses the method of claim 1, wherein receiving access to the first image file includes receiving access to a first image file produced by a scanner defined by a field strength less than or equal to about three Tesla (2.A. Data acquisition and image processing, Deng discloses obtaining 3T MR images from a 3T Siemens Trio scanner.).

Regarding Claim 3, Deng discloses the method of claim 1, wherein receiving access to the second image file includes receiving access to a second image file produced by a scanner defined by a field strength greater than or equal to about five Tesla (2.A. Data acquisition and image processing, Deng discloses obtaining 7T MR images from a 7T Siemens ultra-high field MRI scanner.).

Regarding Claim 4, Deng discloses the method of claim 1, wherein receiving access to the second image file includes receiving access to a second image file produced by a scanner defined by a field strength greater than or equal to about seven Tesla (2.A. Data acquisition and image processing, Deng discloses obtaining 7T MR images from a 7T Siemens ultra-high field MRI scanner.).

Regarding Claim 5, Deng discloses the method of claim 1 wherein: receiving the first image file includes receiving a first image file produced by a first scanner; and receiving the second image file includes receiving a second image file produced by a second scanner different than the first scanner (2.A. Data acquisition and image processing, Deng discloses obtaining 3T MR images (obtained from a 3T Siemens Trio scanner) and 7T MR images (obtained from a 7T Siemens ultra-high field MRI scanner).).

Regarding Claim 6, Deng discloses the method of claim 1, wherein the three-dimensional body portion includes a head of the subject (Fig. 1, 2.A. Data acquisition and image processing, Deng discloses obtaining brain MR images.).

Regarding Claim 8, Deng discloses the method of claim 1, wherein segmenting the SOI in the second image includes manually segmenting the SOI (2.A. Data acquisition and image processing, Deng discloses segmenting the 7T MR images, which involves first using software such as FSL to generate baseline segmentations, followed by performing manual corrections (i.e., manual segmenting) to obtain the final segmentations.).

Regarding Claim 12, Deng discloses the method of claim 1, wherein transforming the SOI includes electronically transforming the SOI (2.A. Data acquisition and image processing, 3.D. Computational time, Deng discloses propagating (i.e., transforming) the brain mask and tissue segmentation from the 7T MR image to the corresponding 3T MR image. The Examiner notes that the methods disclosed by Deng are performed on a computing cluster, and therefore the propagating/transforming process is performed electronically.).
Regarding Claim 16, Deng discloses the method of claim 1, wherein segmenting the SOI comprises segmenting a region of interest including the SOI, wherein the region of interest comprises a portion of the body portion defined by the second image file (Fig. 1, 2.A. Data acquisition and image processing, 3.D. Computational time, 3.C. Comparisons using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, Deng discloses performing tissue segmentation (i.e., region of interest), which includes different structures of the brain such as white matter and gray matter (and more specifically, the hippocampus).).

Regarding Claim 17, Deng discloses a computer system operable to provide any or all of the functionality of claim 1 (3.D. Computational time, Deng discloses utilizing a computing cluster to perform the methods disclosed by Deng.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7 and 9 are rejected as being unpatentable over Deng in view of Lee et al. (US 2023/0225629; hereinafter "Lee").

Regarding Claim 7, Deng discloses the method of claim 1.
Deng does not explicitly disclose wherein the SOI includes one or more of a subthalamic nucleus, globus pallidus, red nucleus, substantia nigra, thalamus, or caudate nucleus (the Examiner notes for the record that all of the claimed structures are present (i.e., "included") in the brain MRI images obtained by Deng, even if not explicitly annotated).

Lee discloses wherein the SOI includes one or more of a subthalamic nucleus, globus pallidus, red nucleus, substantia nigra, thalamus, or caudate nucleus ([0104], Lee discloses segmenting from an MRI image the thalamus, caudate, globus pallidus, subthalamic nucleus, substantia nigra, and red nucleus.).

Deng and Lee are considered analogous to the claimed invention, as they are in the same field of performing image analysis and segmentation on brain MRI images. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Deng such that the MRI images obtained by Deng specifically included and distinguished the SOI disclosed by Lee. The motivation for this combination is the ability to specifically locate these SOI in a brain MRI image so that they can be used for further analysis.

Regarding Claim 9, Deng discloses the method of claim 1. Deng does not explicitly disclose wherein transforming the SOI includes affine transforming the SOI.

Lee discloses wherein transforming the SOI includes affine transforming the SOI ([0104], Lee discloses performing an affine registration (i.e., affine transform) between two MRI volumes.).

Deng and Lee are considered analogous to the claimed invention, as they are in the same field of performing image analysis and segmentation on brain MRI images.
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Deng such that the propagating (i.e., transforming) of the brain mask and tissue segmentation from the 7T MR image to the corresponding 3T MR image was performed based on the affine transform disclosed by Lee. The motivation for this is the ability to use a relatively simple linear transformation, which improves computational efficiency.

Claims 10-11 are rejected as being unpatentable over Deng in view of Li et al. ("Accuracy of 3D volumetric image registration based on CT, MR, and PET/CT phantom experiments", DOI: 10.1120/jacmp.v9i4.2781, Publication Year: 2008; hereinafter "Li").

Regarding Claim 10, Deng discloses the method of claim 1. Deng does not explicitly disclose wherein transforming the SOI includes transforming the SOI to an accuracy of less than or equal to about one voxel.

Li discloses wherein transforming the SOI includes transforming the SOI to an accuracy of less than or equal to about one voxel (Introduction, D. Accuracy of MR/MR phantom image registration, Li discloses a 3DVIR technique for 3D image registration, which can achieve accuracy within 1/10 voxel. Specifically, Li notes that for MR image registration 3DVIR can obtain accuracy within 0.04 ± 0.10 voxels (0.03 ± 0.07 mm).).

Deng and Li are considered analogous to the claimed invention, as they are in the same field of performing image analysis and transformations on MRI images. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Deng such that the propagating (i.e., transforming) of the brain mask and tissue segmentation from the 7T MR image to the corresponding 3T MR image was performed by the methods disclosed by Li.
The motivation for this combination is the ability to utilize a registration process with high accuracy.

Regarding Claim 11, Deng discloses the method of claim 1. Deng does not explicitly disclose wherein transforming the SOI includes transforming the SOI to an accuracy of less than or equal to about 1 mm.

Li discloses wherein transforming the SOI includes transforming the SOI to an accuracy of less than or equal to about 1 mm (Introduction, D. Accuracy of MR/MR phantom image registration, Li discloses a 3DVIR technique for 3D image registration, which can achieve accuracy within 1/10 voxel. Specifically, Li notes that for MR image registration 3DVIR can obtain accuracy within 0.04 ± 0.10 voxels (0.03 ± 0.07 mm).).

Deng and Li are considered analogous to the claimed invention, as they are in the same field of performing image analysis and transformations on MRI images. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Deng such that the propagating (i.e., transforming) of the brain mask and tissue segmentation from the 7T MR image to the corresponding 3T MR image was performed by the methods disclosed by Li. The motivation for this combination is the ability to utilize a registration process with high accuracy.

Claim 13 is rejected as being unpatentable over Deng in view of Wei et al. ("7T Guided 3T Brain Tissue Segmentation Using Cascaded Nested Network", DOI: 10.1109/ISBI45749.2020.9098617, Publication Year: 2020; hereinafter "Wei").

Regarding Claim 13, Deng discloses the method of claim 1. Deng does not explicitly disclose quality checking the SOI in the third image file.

Wei discloses quality checking the SOI in the third image file (2.1. Correlation Coefficient Map, Wei discloses calculating a correlation coefficient map as a way to control misalignment errors by applying large weights to well-aligned voxels and small weights to misaligned voxels.).

Deng and Wei are considered analogous to the claimed invention, as they are in the same field of performing image analysis and segmentation on brain MRI images. Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Deng such that, after transformation, a quality check was performed using the correlation coefficient map disclosed by Wei. The motivation for this combination is the ability to make fine-tuned adjustments and account for misalignment errors that could be produced after performing a transformation.

Allowable Subject Matter

Claims 14-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is a statement of reasons for the indication of allowable subject matter: The limitations presented in claim 14 have been searched and no reference has been found to teach the disclosed limitations. Furthermore, none of the cited prior art or art found during search provides a motivation (alone or in combination) to teach the disclosed limitations.

The Examiner notes the following art:

Moteabbed et al. ("Validation of a deformable image registration technique for cone beam CT-based dose verification", DOI: 10.1118/1.4903292, Publication Year: 2014) discloses a method for validating (i.e., quality checking) a registration between two CT images (a CT image and a CBCT image). The process involves applying vector fields to both images to generate a warped (i.e., modified) wCT image and a warped wCBCT image.
The original CT image is registered to the warped wCBCT image to generate a wCT2 image, and the quality of the registration is determined based on comparing the warped wCT image and the registered wCT2 image (see 2.B. DIR Validation Workflow, Fig. 2). While Moteabbed discloses a clear comparison step, there is no clear motivation or evidence suggesting that the images utilized by Moteabbed are analogous to the claimed different "image files" recited in claim 14.

Yang et al. ("A fast inverse consistent deformable image registration method based on symmetric optical flow computation", DOI: 10.1088/0031-9155/53/21/017, Publication Year: 2008) discloses the calculation of an inverse consistency error, which can be applied to image transformations performed with two MRI images (see Fig. 1). While the comparison step disclosed by Yang is similar to the claimed comparison step, there is similarly no motivation or evidence to suggest how the disclosure of Deng could be modified by the disclosure of Yang to teach the limitations of claim 14.

Brock et al. ("Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132", DOI: https://doi.org/10.1002/mp.12256, Publication Year: 2017) discloses utilizing image registration for segmentation in radiology images, and specifically highlights different quantitative measures for determining the accuracy of image registration, such as target registration error (see 4.C. Quantitative measures of image registration accuracy). Similarly to the other noted references, the comparison step between landmarks is similar to the comparing step of claim 14; however, there is no disclosure regarding further projecting an already-transformed image (i.e., third image file) to a common coordinate space with another image.

Claim 15 is dependent on claim 14 and incorporates all of the aforementioned allowable subject matter.
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PROMOTTO TAJRIAN ISLAM, whose telephone number is (703) 756-5584. The examiner can normally be reached Monday - Friday, 8:30 am - 5:00 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at (571) 272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/PROMOTTO TAJRIAN ISLAM/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

Sep 18, 2023
Application Filed
Mar 13, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601732
CELL IMAGE ANALYSIS DEVICE
2y 5m to grant Granted Apr 14, 2026
Patent 12586174
Method for Processing Yarn Spindle Data, Electronic Device and Storage Medium
2y 5m to grant Granted Mar 24, 2026
Patent 12579674
IMAGE SELECTION APPARATUS, IMAGE SELECTION METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12567257
Method and Apparatus for Obstacle Recognition, Device, Medium, and Robot Lawn Mower
2y 5m to grant Granted Mar 03, 2026
Patent 12555401
Auto-Document Detection & Capture
2y 5m to grant Granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 78%
With Interview: 95% (+17.5%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
