Prosecution Insights
Last updated: April 19, 2026
Application No. 18/493,698

SYSTEMS AND METHODS FOR REDUCING ARTIFACT IN MEDICAL IMAGES USING SIMULATED IMAGES

Non-Final OA (§101, §103)

Filed: Oct 24, 2023
Examiner: SATCHER, DION JOHN
Art Unit: 2676
Tech Center: 2600 (Communications)
Assignee: GE Precision Healthcare LLC
OA Round: 1 (Non-Final)

Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (33 granted / 39 resolved); +22.6% vs Tech Center average (above average)
Interview Lift: +14.2% across resolved cases with interview (moderate lift)
Typical Timeline: 3y 0m average prosecution; 29 applications currently pending
Career History: 68 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 61.9% (+21.9% vs TC avg)
§102: 15.1% (-24.9% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 39 resolved cases.
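The headline figures above can be sanity-checked with simple arithmetic. The sketch below assumes the dashboard computes the career allow rate as grants divided by resolved cases and derives the Tech Center average by subtracting the reported delta; the vendor's exact methodology is not stated, so both formulas are assumptions.

```python
# Assumed reconstruction of the dashboard arithmetic (the tool's exact
# methodology is not published): allow rate = grants / resolved cases,
# and the TC average is implied by the reported "+22.6% vs TC avg" delta.

granted, resolved = 33, 39
career_allow_rate = granted / resolved            # 33/39, displayed as 85%
implied_tc_average = career_allow_rate - 0.226    # implied TC 2600 average

print(f"Career allow rate: {career_allow_rate:.1%}")    # 84.6%
print(f"Implied TC average: {implied_tc_average:.1%}")  # 62.0%
```

Under these assumptions the displayed 85% is a rounding of 84.6%, and the implied Tech Center 2600 average is roughly 62%.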

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Preliminary Amendment

The Preliminary Amendment submitted on 10/24/2023 has been entered and made of record.

Status of Claims

This communication is in response to the application filed on 10/24/2023. Claims 1–20 are pending in this application.

Drawings

The drawings filed on 10/24/2023 are accepted by the Examiner.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 10/24/2023 and 01/15/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, are mathematical concepts and computation. Independent claims 1, 11 and 15 recite a method, a system and a method, respectively.
This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations that would tie them to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception, because the steps of the claimed invention can be performed using mathematical computation and concepts, and no additional features in the claims would preclude them from being performed as such, except for generic computer elements recited at a high level of generality (i.e., processor, memory).

According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:

STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter); or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?

Using this two-step inquiry, independent claims 1, 11 and 15 are directed to an abstract idea, as shown below:

STEP 1: Do the claims fall within one of the statutory categories? YES. Independent claims 1, 11 and 15 are directed to a method, a system and a method, respectively.

STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES. The claims are directed toward mathematical computation and concepts (i.e., an abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:

Mathematical concepts: mathematical relationships, mathematical formulas or equations, mathematical calculations.
Certain methods of organizing human activity: fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
Mental processes: concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).

Independent claims 1, 11 and 15 comprise mathematical computations and concepts (or generic computers or components configured to perform the method) and, therefore, an abstract idea.

Regarding independent claim 1, the limitations recite mathematical concepts/data manipulation: "training an artifact removal neural network using the set of training image pairs" (training a model is a mathematical algorithm; see PEG, Abstract Ideas: mathematical concepts); "generating an output of the artifact removal neural network based on an inputted acquired image" (using a trained model to compute output values is mathematical computation); "generating a set of training image pairs", "generating images from RGB images, simulating motion... simulating contrast... simulating phase contrast dynamics" (data preparation and simulation steps are mathematical processing of image data).

Regarding independent claim 11, the limitations recite mathematical concepts implemented on a generic computer: "receive a plurality of simulated MR images..." (data intake); "generate an undersampled version of the simulated MR image" (undersampling in k-space is a mathematical operation); "create... image pairs...", "train the neural network...", "generate artifact-reduced images..." (data manipulation and machine-learning computations); "display the artifact-reduced images..." (presentation of data).

Regarding independent claim 15, the limitations recite mathematical concepts/data manipulation: "simulating a motion phase...", "simulating a contrast phase...", "simulating phase contrast dynamics..." (simulations of image data are mathematical operations); "generating simulated MR images..." (mathematical generation of images).

STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO. The claims do not recite additional elements that integrate the judicial exception into a practical application.

With regard to STEP 2A (PRONG 2), the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application: an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field; an additional element applies or uses a judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim; an additional element effects a transformation or reduction of a particular article to a different state or thing; and an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.

While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application: an additional element merely recites the words "apply it" (or an equivalent) with the judicial exception, merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea; an additional element adds insignificant extra-solution activity to the judicial exception; and an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.

Independent claims 1, 11 and 15 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application. Independent claims 1, 11 and 15 recite generic computer components (a memory, a processor and a non-transitory computer-readable storage medium) and/or insignificant pre/post-solution activity that do not add a meaningful limitation to the abstract idea, because they amount to simply implementing the abstract idea in a system. These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution actions, which is a form of insignificant extra-solution activity. Further, the claimed components are recited generically and operate in their ordinary capacity, such that they do not use the judicial exception in a manner that imposes a meaningful limit on it.

Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? NO. The claims do not recite additional elements that amount to significantly more than the judicial exception.

With regard to STEP 2B, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, examiners should continue to consider whether an additional element or combination of elements: adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.

Independent claims 1, 11 and 15 do not recite any additional elements that are not well-understood, routine or conventional. The use of generic computer elements is a routine, well-understood and conventional process performed by computers.

Thus, because independent claims 1, 11 and 15 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, independent claims 1, 11 and 15 are not eligible subject matter under 35 U.S.C. § 101.
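The examiner's conclusion follows the fixed order of the eligibility inquiry. Purely as a schematic illustration of that order (not a statement of USPTO procedure beyond what the Office Action itself recites), the flow can be sketched as a short decision function, with the booleans reflecting the findings made above for claims 1, 11 and 15:

```python
# Schematic sketch of the Alice/Mayo flow the Office Action walks through
# (Step 1, Step 2A Prongs 1-2, Step 2B). Illustrative only.

def eligible_under_101(statutory_category: bool,
                       recites_judicial_exception: bool,
                       integrated_into_practical_application: bool,
                       significantly_more: bool) -> bool:
    if not statutory_category:                     # STEP 1
        return False
    if not recites_judicial_exception:             # STEP 2A, Prong 1
        return True
    if integrated_into_practical_application:      # STEP 2A, Prong 2
        return True
    return significantly_more                      # STEP 2B

# The examiner's findings here: statutory category, but an abstract idea
# with no integration and nothing significantly more -> ineligible.
print(eligible_under_101(True, True, False, False))  # False
```

A "yes" at either Prong 2 or Step 2B would have flipped the outcome, which is why responses to this kind of rejection typically target one of those two steps.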
Regarding claims 2–6, 13, 16 and 18: the additional limitations do not integrate the mathematical concept into a practical application or add significantly more to the mathematical process. They are merely steps of data gathering and data preparation.

Regarding claims 7, 12, 14 and 17: the additional limitations do not integrate the mathematical concept into a practical application or add significantly more to the mathematical process. They are merely mathematical computations.

Regarding claims 8–10 and 19–20: the additional limitations do not integrate the mathematical concept into a practical application or add significantly more to the mathematical process. They are merely mathematical computation applied using the neural network.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1, 2, 5–10, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Schlemper et al. (US 20200294287 A1, hereafter "Schlemper") in view of Wu et al. (see NPL attached, "Image-based motion artifact reduction on liver dynamic contrast enhanced MRI", hereafter "Wu").

Regarding claim 1, Schlemper teaches a method for an image processing system (See Schlemper, Abstract: Techniques for generating magnetic resonance (MR) images from MR data obtained by a magnetic resonance imaging (MRI) system comprising a plurality of RF coils configured to detect RF signals), comprising: generating a plurality of simulated images (See Schlemper, ¶ [0191]: Repeating process 500 multiple times by starting from the same MR volume, but varying the process parameters (e.g., transformations applied to the image at acts 508, 510, and 512), enables the generation of multiple training data pairs from a single MR volume, which is a type of data augmentation that not only increases the diversity and coverage of the training data, but also reduces the demand to obtain greater amounts of real-world MRI images needed for training, which can be expensive, time-consuming, and impractical); [generating a set of training image pairs based on the plurality of simulated images; training an artifact removal neural network using the set of training image pairs; and generating an output of the artifact removal neural network based on an inputted acquired image, wherein generating the plurality of simulated images comprises generating images from RGB images, simulating motion in the simulated images]; simulating contrast in the simulated images (See Schlemper, ¶ [0197]: In some embodiments, the histogram augmentation function I(r) generated at 510 may be used to change the intensity variations in regions of the image to simulate various effects, including, but not limited to, the effect of RF coil correlation and/or to provide different contrasts that may occur in multi-echo pulse sequences. Note: the Examiner is interpreting the histogram augmentation as simulating contrast); and simulating phase contrast dynamics in the simulated images (See Schlemper, ¶ [0198]: Next, at acts 514, 516, and 518, synthetic phase is generated from a linear combination of spherical harmonic basis functions to generate the target complex-valued volume x (520). In some embodiments, coefficients α_i of N spherical harmonic basis functions Y_i are sampled, at 514, at random to generate a phase image, at 516, according to θ = Σ_{i=1..N} α_i·Y_i. In turn, the complex-valued target volume (520) may be given by x = x″(r)·e^{iθ}. In some embodiments, the number of spherical harmonics is selected by the user; the greater the number, the more complex the resulting phase. In some embodiments, the range of values for each spherical harmonic coefficient α_i may be set by the user, for example, empirically. Note: the Examiner is interpreting the synthetic phase as simulating the phase contrast dynamics).

However, Schlemper fails to teach generating a set of training image pairs based on the plurality of simulated images; training an artifact removal neural network using the set of training image pairs; and generating an output of the artifact removal neural network based on an inputted acquired image, wherein generating the plurality of simulated images comprises generating images from RGB images and simulating motion in the simulated images.

Wu, working in the same field of endeavor, teaches: generating a set of training image pairs based on the plurality of simulated images (See Wu, Pg. 2, ln. 3–5, § 2.2 Motion artifact simulation: For the model training process, simulated motion artifacts were generated based on the 'ground truth' clean image (Fig. 1a) through image-space and k-space image transformations); training an artifact removal neural network using the set of training image pairs (See Wu, Pg. 4, ln. 4–5: First, the stage-I network was trained independently with image patch pairs of the clean and simulated motion images); and generating an output of the artifact removal neural network based on an inputted acquired image, wherein generating the plurality of simulated images comprises generating images from RGB images (See Wu, Pg. 3, ln. 5–6, § 2.3 Model architecture: In this study, a two-stage deep CNN model was developed to reduce motion artifacts of the liver DCE-MRI images), simulating motion in the simulated images (See Wu, Pg. 2, ln. 3–5, § 2.2 Motion artifact simulation: For the model training process, simulated motion artifacts were generated based on the 'ground truth' clean image (Fig. 1a) through image-space and k-space image transformations).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper's reference to generate a set of training image pairs based on the plurality of simulated images; train an artifact removal neural network using the set of training image pairs; and generate an output of the artifact removal neural network based on an inputted acquired image, wherein generating the plurality of simulated images comprises generating images from RGB images and simulating motion in the simulated images, based on the method of Wu's reference. The suggestion/motivation would have been to produce high-quality motion-reduced images with little perceptual difference from the target image (See Wu, Table 1).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu with Schlemper to obtain the invention as specified in claim 1.

Regarding claim 2, Schlemper in view of Wu teaches the method of claim 1, [wherein each of the set of training image pairs comprises an input image and a target image]. However, Schlemper fails to teach wherein each of the set of training image pairs comprises an input image and a target image.

Wu, working in the same field of endeavor, teaches: wherein each of the set of training image pairs comprises an input image and a target image (See Wu, Pg. 2, ln. 3–5, § 2.2 Motion artifact simulation: For the model training process, simulated motion artifacts were generated based on the 'ground truth' clean image (Fig. 1a) through image-space and k-space image transformations).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper's reference wherein each of the set of training image pairs comprises an input image and a target image, based on the method of Wu's reference. The suggestion/motivation would have been to produce high-quality motion-reduced images with little perceptual difference from the target image (See Wu, Table 1). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu with Schlemper to obtain the invention as specified in claim 2.
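Wu's artifact simulation, as quoted, builds each training pair by corrupting a clean image through image-space and k-space transformations. The sketch below illustrates that general idea only; the rigid shift, the fraction of corrupted phase-encode lines, and the toy "anatomy" are illustrative assumptions, not Wu's actual parameters.

```python
import numpy as np

# Minimal sketch of k-space motion-artifact simulation: replace a subset of
# phase-encode lines in the clean image's k-space with lines taken from a
# translated copy, mimicking motion between acquisition shots.

def simulate_motion(clean: np.ndarray, shift: int = 4,
                    corrupt_fraction: float = 0.3) -> np.ndarray:
    moved = np.roll(clean, shift, axis=0)          # rigid translation in image space
    k_clean = np.fft.fft2(clean)
    k_moved = np.fft.fft2(moved)
    rows = np.random.default_rng(0).random(clean.shape[0]) < corrupt_fraction
    k_mixed = np.where(rows[:, None], k_moved, k_clean)   # mix k-space line-by-line
    return np.abs(np.fft.ifft2(k_mixed))           # artifact-laden magnitude image

clean = np.zeros((64, 64))
clean[24:40, 24:40] = 1.0                          # toy "anatomy"
corrupted = simulate_motion(clean)
pair = (corrupted, clean)                          # (input, target) training pair
```

In this scheme the corrupted image serves as the network input and the clean image as the target, matching the input/target pairing recited in claim 2.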
Regarding claim 5, Schlemper teaches the method of claim 1, wherein the plurality of simulated images are simulated magnetic resonance (MR) images (See Schlemper, ¶ [0191]: Repeating process 500 multiple times by starting from the same MR volume, but varying the process parameters (e.g., transformations applied to the image at acts 508, 510, and 512), enables the generation of multiple training data pairs from a single MR volume, which is a type of data augmentation that not only increases the diversity and coverage of the training data, but also reduces the demand to obtain greater amounts of real-world MRI images needed for training, which can be expensive, time-consuming, and impractical).

Regarding claim 6, Schlemper teaches the method of claim 1, wherein the inputted acquired image is an MR image acquired by an MRI scanner (See Schlemper, ¶ [0192]: As shown in FIGS. 5A-5C, process 500 begins by accessing a reference magnitude MR volume 502. The MR volume 502 may comprise one or multiple images. Each of the image(s) may represent an anatomical slice of a subject being imaged. The MR volume 502 may include one or more magnitude images obtained by a clinical MRI system).

Regarding claim 7, Schlemper teaches the method of claim 6, wherein the MR image acquired by the MRI scanner is undersampled and comprises undersampling-oriented artifacts (See Schlemper, ¶ [0308]: To better train the neural network model, it may be desirable to include synthetic noise in the synthetic training data (e.g., to simulate non-ideal MR imaging conditions). In act 1014, Gaussian noise may be sampled. The Gaussian noise may be selected to match the volume size of the loaded volume. Alternatively or additionally, in some embodiments, noise may be added to the reference volume and the moving volume by undersampling a percentage of the MR data in k-space. In act 1016, the Gaussian noise may be added to the reference volume and the moving volume to form the synthetic training data pair for use by the neural network model).

Regarding claim 8, Schlemper in view of Wu teaches the method of claim 1, [further comprising testing the trained artifact removal neural network with test image pairs generated from the simulated images]. However, Schlemper fails to teach testing the trained artifact removal neural network with test image pairs generated from the simulated images.

Wu, working in the same field of endeavor, teaches: testing the trained artifact removal neural network with test image pairs generated from the simulated images (See Wu, Pg. 5, ln. 51–53, § 3 Results: We compared the model performance of artifact reduction by using the stage-I model alone, stage-I + PL model, and the two-stage full model with 312 testing images with simulated motion artifacts (Table 1)).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper's reference to test the trained artifact removal neural network with test image pairs generated from the simulated images, based on the method of Wu's reference. The suggestion/motivation would have been to produce high-quality motion-reduced images with little perceptual difference from the target image (See Wu, Table 1). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu with Schlemper to obtain the invention as specified in claim 8.

Regarding claim 9, Schlemper in view of Wu teaches the method of claim 1, [wherein the artifact removal neural network is a multi-phase network or multi-echo network].
However, Schlemper fails to teach wherein the artifact removal neural network is a multi-phase network or multi-echo network. Wu, working in the same field of endeavor, teaches: wherein the artifact removal neural network is a multi-phase network or multi-echo network (See Wu, Pg. 2, ln. 53–56, § 2.1 Liver MRI dataset: A multi-phase DCE imaging protocol covering the whole liver volume was acquired before and after intravenous contrast administration (0.1 mL/kg Eovist® Gadoxetate Disodium) with an injection rate of 2 mL/sec. Note: the neural network is a multi-phase network because it uses multi-phase data).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper's reference wherein the artifact removal neural network is a multi-phase network or multi-echo network, based on the method of Wu's reference. The suggestion/motivation would have been to produce high-quality motion-reduced images with little perceptual difference from the target image (See Wu, Table 1). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu with Schlemper to obtain the invention as specified in claim 9.

Regarding claim 10, Schlemper teaches wherein the artifact removal neural network is a single-phase network or single-echo network (See Schlemper, ¶ [0234]: For example, the reference volume 602 may include multiple MR images, each of which corresponds to a different volumetric slice of the imaged patient (e.g., the multiple MR images may include multiple sagittal slices, multiple axial slices, or multiple coronal slices) obtained from a first instance of an MR imaging protocol (e.g., a series of one or more pulse sequences for imaging the patient). Note: the Examiner is interpreting the acquisition as occurring during a single first instance/phase).

Regarding claim 15, Schlemper in view of Wu teaches a method for creating simulated magnetic resonance (MR) images for training a model to reduce an amount of artifact in acquired MR images (See Schlemper, ¶ [0191]: Repeating process 500 multiple times by starting from the same MR volume, but varying the process parameters (e.g., transformations applied to the image at acts 508, 510, and 512), enables the generation of multiple training data pairs from a single MR volume, which is a type of data augmentation that not only increases the diversity and coverage of the training data, but also reduces the demand to obtain greater amounts of real-world MRI images needed for training, which can be expensive, time-consuming, and impractical. See also ¶ [0126]: Returning to FIG. 2C, in some embodiments, neural network 238 may be configured to suppress artefacts in the image domain), the method comprising: obtaining a set of reference images (See Schlemper, ¶ [0191], as quoted above); [simulating a motion phase in one or more of the reference images]; simulating a contrast phase in one or more of the reference images (See Schlemper, ¶ [0197]: In some embodiments, the histogram augmentation function I(r) generated at 510 may be used to change the intensity variations in regions of the image to simulate various effects, including, but not limited to, the effect of RF coil correlation and/or to provide different contrasts that may occur in multi-echo pulse sequences. Note: the Examiner is interpreting the histogram augmentation as simulating contrast); simulating phase contrast dynamics in one or more of the reference images (See Schlemper, ¶ [0198]: Next, at acts 514, 516, and 518, synthetic phase is generated from a linear combination of spherical harmonic basis functions to generate the target complex-valued volume x (520). In some embodiments, coefficients α_i of N spherical harmonic basis functions Y_i are sampled, at 514, at random to generate a phase image, at 516, according to θ = Σ_{i=1..N} α_i·Y_i. In turn, the complex-valued target volume (520) may be given by x = x″(r)·e^{iθ}. In some embodiments, the number of spherical harmonics is selected by the user; the greater the number, the more complex the resulting phase. In some embodiments, the range of values for each spherical harmonic coefficient α_i may be set by the user, for example, empirically. Note: the Examiner is interpreting the synthetic phase as simulating the phase contrast dynamics); and generating simulated MR images of the reference images based on simulated [motion phases], contrast phases (See Schlemper, ¶ [0197], as quoted above) and phase contrast dynamics (See Schlemper, ¶ [0198], as quoted above).

However, Schlemper fails to teach simulating a motion phase in one or more of the reference images, and the corresponding motion phases. Wu, working in the same field of endeavor, teaches: simulating a motion phase in one or more of the reference images, and motion phases (See Wu, Pg. 2, ln. 3–5, § 2.2 Motion artifact simulation: For the model training process, simulated motion artifacts were generated based on the 'ground truth' clean image (Fig. 1a) through image-space and k-space image transformations).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper's reference to simulate a motion phase in one or more of the reference images, and the corresponding motion phases, based on the method of Wu's reference. The suggestion/motivation would have been to produce high-quality motion-reduced images with little perceptual difference from the target image (See Wu, Table 1).
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu with Schlemper to obtain the invention as specified in claim 15.

Regarding claim 16, Schlemper teaches the method of claim 15, wherein the reference images have a higher resolution than the acquired MR images (See Schlemper, ¶ [0216], Additionally or alternatively, another way to generate a training dataset is to use source images of higher quality x_o, such as those obtained from low-field scanners, but using more data samples. The sensor data can be obtained directly by collecting the scanner measurements y_o. The higher quality data x_o and input data x are related by a mask in the sensor domain, i.e. y = M·y_o).

Claim(s) 11–14 and 17–20 are rejected under 35 U.S.C. 103 as being unpatentable over Schlemper et al. (US 20200294287 A1, hereafter, "Schlemper") in view of Wu et al. (See NPL attached, "Image-based motion artifact reduction on liver dynamic contrast enhanced MRI", hereafter, "Wu") further in view of Huang et al. (US 20200065969 A1, hereafter, "Huang").

Regarding claim 11, Schlemper teaches an image processing system comprising: a processor communicably coupled to a non-transitory memory storing a neural network, the memory including instructions that when executed cause the processor (See Schlemper, [FIG. 26], 2602 PROCESSOR, 2604 MEMORY) to:

receive a plurality of simulated MR images (See Schlemper, ¶ [0191], Repeating process 500 multiple times by starting from the same MR volume, but varying the process parameters (e.g., transformations applied to the image at acts 508, 510, and 512) enables the generation of multiple training data pairs from a single MR volume, which is a type of data augmentation that not only increases the diversity and coverage of the training data, but also reduces the demand to obtain greater amounts of real-world MRI images needed for training, which can be expensive, time-consuming, and impractical), each simulated MR image generated by [simulating motion], contrast (See Schlemper, ¶ [0197], In some embodiments, the histogram augmentation function I(r) generated at 510 may be used to change the intensity variations in regions of the image to simulate various effects, including, but not limited to the effect of RF coil correlation and/or to provide different contrasts that may occur in multi-echo pulse sequences. Note: Examiner is interpreting the histogram augmentation as the simulating contrast), and phase contrast dynamics (See Schlemper, ¶ [0198], Next, at acts 514, 516, and 518, synthetic phase is generated from a linear combination of spherical harmonic basis functions to generate the target complex-valued volume x 520. In some embodiments, coefficients α_i of N spherical harmonic basis functions Y_i are sampled, at 514, at random to generate a phase image, at 516, according to: θ = Σ_{i=1}^{N} α_i Y_i. In turn, the complex-valued target volume 520 may be given by: x = x″(r)e^{iθ}. In some embodiments, the number of spherical harmonics is selected by the user; the greater the number, the more complex the resulting phase. In some embodiments, the range of values for each spherical harmonic coefficient α_i may be set by the user, for example, empirically. Note: Examiner is interpreting the synthetic phase as simulating the phase contrast dynamic);

[for each simulated MR image, generate an undersampled version of the simulated MR image]; create a respective plurality of image pairs, each image pair including a simulated MR image as a target, ground truth image (See Schlemper, ¶ [0191], Repeating process 500 multiple times by starting from the same MR volume, but varying the process parameters (e.g., transformations applied to the image at acts 508, 510, and 512) enables the generation of multiple training data pairs from a single MR volume, which is a type of data augmentation that not only increases the diversity and coverage of the training data. ¶ [0199], Next, after the target image 520 is generated. Note: the target image is being interpreted as the simulated MR target and ground truth), and [a corresponding undersampled version of the simulated MR image as an input image; train the neural network using the image pairs; deploy the trained neural network to generate artifact-reduced images from MR images acquired from a scanned subject]; and display the artifact-reduced images on a display device of the image processing system (See Schlemper, ¶ [0138], The image(s) generated at act 260 may then be saved, sent to another system, displayed, or output in any other suitable way).

However, Schlemper fail(s) to teach simulating motion; train the neural network using the image pairs; deploy the trained neural network to generate artifact-reduced images from MR images acquired from a scanned subject.

Wu, working in the same field of endeavor, teaches: simulating motion (See Wu, [Pg. 2, ln. 3–5, 2.2. Motion artifact simulation], For model training process, simulated motion artifacts were generated based on the ‘ground truth’ clean image (Fig. 1a) through image space and K-space image transformations); train the neural network using the image pairs (See Wu, [Pg. 4, ln. 4–5], First, the stage-I network was trained independently with image patch pairs of the clean and simulated motion images); deploy the trained neural network to generate artifact-reduced images from MR images acquired from a scanned subject (See Wu, [Pg. 4, ln. 4–5], First, the stage-I network was trained independently with image patch pairs of the clean and simulated motion images).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference to simulate motion, train the neural network using the image pairs, and deploy the trained neural network, based on the method of Wu’s reference. The suggestion/motivation would have been to produce high quality motion reduced images with little perceptual difference from the target image (See Wu, [Table 1]).

However, Schlemper and Wu fail(s) to teach for each simulated MR image, generate an undersampled version of the simulated MR image; a corresponding undersampled version of the simulated MR image as an input image.

Huang, working in the same field of endeavor, teaches: for each simulated MR image, generate an undersampled version of the simulated MR image (See Huang, ¶ [0038], The simulation is of a full sampling. To provide raw data representing less than a full sampling (e.g., MR fast imaging), an undersampled scan may be simulated, such as simulating based on an under-sampling mask defining a rate of undersampling and line order. Alternatively, the k-space data from the full sampling is processed, removing some of the raw data (e.g., removing k-space or sinogram data) to simulate undersampling scan. Under-sampling is performed retrospectively by keeping a subset of the full-sampled data); a corresponding undersampled version of the simulated MR image as an input image (See Huang, ¶ [0038], The simulation is of a full sampling. To provide raw data representing less than a full sampling (e.g., MR fast imaging), an undersampled scan may be simulated, such as simulating based on an under-sampling mask defining a rate of undersampling and line order. Alternatively, the k-space data from the full sampling is processed, removing some of the raw data (e.g., removing k-space or sinogram data) to simulate undersampling scan. Under-sampling is performed retrospectively by keeping a subset of the full-sampled data).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference to, for each simulated MR image, generate an undersampled version of the simulated MR image and use a corresponding undersampled version of the simulated MR image as an input image, based on the method of Huang’s reference. The suggestion/motivation would have been to more easily collect diverse samples (See Huang, ¶ [0008]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu and Huang with Schlemper to obtain the invention as specified in claim 11.

Regarding claim 12, Schlemper in view of Wu and further in view of Huang teaches the image processing system of claim 11, wherein the MR images acquired from the scanned subject are undersampled (See Schlemper, ¶ [0092], configured to reconstruct one or more images from the output of the neural network 210 (e.g., including when the MR data is undersampled)) [multi-phase/echo images].

However, Schlemper fail(s) to teach multi-phase/echo images. Wu, working in the same field of endeavor, teaches: multi-phase/echo images (See Wu, [Pg. 2, ln. 52-55, 2.1.
Liver MRI dataset], A multi-phase DCE imaging protocol covering the whole liver volume was acquired before and after intravenous contrast administration (0.1 mL/kg Eovist® Gadoxetate Disodium) with an injection rate of 2 mL/sec. [Pg. 2, ln. 3-5, 2.2. Motion artifact simulation], For model training process, simulated motion artifacts were generated based on the ‘ground truth’ clean image (Fig. 1a) through image space and K-space image transformations).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference to include multi-phase/echo images based on the method of Wu’s reference. The suggestion/motivation would have been to produce high quality motion reduced images with little perceptual difference from the target image (See Wu, [Table 1]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu with Schlemper and Huang to obtain the invention as specified in claim 12.

Regarding claim 13, Schlemper teaches the image processing system of claim 11, wherein each of the image pairs are 1-1 matched (See Schlemper, ¶ [0129], In the above expression for loss, the generator G is the filtering network and the discriminator D is trained to best differentiate between images filtered with the network G and original noise-free images (ground truth). Note: Examiner is interpreting the original and ground truth as being 1-1 since they are the same image).

Regarding claim 14, Schlemper in view of Wu and further in view of Huang teaches the image processing system of claim 11, [wherein generating the undersampled version of the simulated MR image comprises undersampling the simulated MR image in k-space]. However, Schlemper and Wu fail(s) to teach wherein generating the undersampled version of the simulated MR image comprises undersampling the simulated MR image in k-space.

Huang, working in the same field of endeavor, teaches: wherein generating the undersampled version of the simulated MR image comprises undersampling the simulated MR image in k-space (See Huang, ¶ [0038], The simulation is of a full sampling. To provide raw data representing less than a full sampling (e.g., MR fast imaging), an undersampled scan may be simulated, such as simulating based on an under-sampling mask defining a rate of undersampling and line order. Alternatively, the k-space data from the full sampling is processed, removing some of the raw data (e.g., removing k-space or sinogram data) to simulate undersampling scan. Under-sampling is performed retrospectively by keeping a subset of the full-sampled data).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference such that generating the undersampled version of the simulated MR image comprises undersampling the simulated MR image in k-space, based on the method of Huang’s reference. The suggestion/motivation would have been to more easily collect diverse samples (See Huang, ¶ [0008]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Huang with Schlemper and Wu to obtain the invention as specified in claim 14.

Regarding claim 17, Schlemper in view of Wu and further in view of Huang teaches the method of claim 15, [wherein the simulated MR images are undersampled in k-space to generate lower resolution versions of the simulated MR images]. However, Schlemper and Wu fail(s) to teach wherein the simulated MR images are undersampled in k-space to generate lower resolution versions of the simulated MR images.

Huang, working in the same field of endeavor, teaches: wherein the simulated MR images are undersampled in k-space to generate lower resolution versions of the simulated MR images (See Huang, ¶ [0038], The simulation is of a full sampling. To provide raw data representing less than a full sampling (e.g., MR fast imaging), an undersampled scan may be simulated, such as simulating based on an under-sampling mask defining a rate of undersampling and line order. Alternatively, the k-space data from the full sampling is processed, removing some of the raw data (e.g., removing k-space or sinogram data) to simulate undersampling scan. Under-sampling is performed retrospectively by keeping a subset of the full-sampled data).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference such that the simulated MR images are undersampled in k-space to generate lower resolution versions of the simulated MR images, based on the method of Huang’s reference. The suggestion/motivation would have been to more easily collect diverse samples (See Huang, ¶ [0008]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Huang with Schlemper and Wu to obtain the invention as specified in claim 17.
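Huang's retrospective under-sampling (¶ [0038]), cited throughout the rejections above, amounts to masking phase-encode lines in k-space and reconstructing an aliased input image by inverse FFT. A minimal NumPy sketch, assuming a regular sampling mask with a fully sampled k-space center (a common choice for illustration; the quoted passage does not specify the mask pattern):

```python
import numpy as np

def undersample_kspace(image, accel=2, center_lines=8):
    """Retrospectively undersample a (simulated) MR image: transform to
    k-space, keep a subset of phase-encode lines per a binary mask, and
    reconstruct the degraded input image by inverse FFT."""
    k_full = np.fft.fftshift(np.fft.fft2(image))  # fully sampled k-space
    ny = k_full.shape[0]
    mask = np.zeros(ny, dtype=bool)
    mask[::accel] = True                          # keep every accel-th line
    c = ny // 2
    mask[c - center_lines // 2 : c + center_lines // 2] = True  # keep center
    k_under = k_full * mask[:, None]              # y = M . y_o, per Schlemper par. [0216]
    return np.fft.ifft2(np.fft.ifftshift(k_under)), mask

img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0   # toy "clean" image
recon, mask = undersample_kspace(img)
assert mask.sum() < mask.size                     # strict subset of lines kept
```

Under-sampling is performed by keeping a subset of the fully sampled data, so the same clean image yields both the target and, after masking, the corresponding input.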
Regarding claim 18, Schlemper in view of Wu and further in view of Huang teaches the method of claim 17, [further comprising generating training image pairs], wherein the simulated MR images are targets (See Schlemper, ¶ [0191], Repeating process 500 multiple times by starting from the same MR volume, but varying the process parameters (e.g., transformations applied to the image at acts 508, 510, and 512) enables the generation of multiple training data pairs from a single MR volume, which is a type of data augmentation that not only increases the diversity and coverage of the training data. ¶ [0199], Next, after the target image 520 is generated. Note: the target image is being interpreted as the simulated MR target and ground truth) and [the lower resolution versions of the simulated MR images are inputs and wherein the inputs and targets are in x-y-phase format].

However, Schlemper fail(s) to teach generating training image pairs, wherein the inputs and targets are in x-y-phase format.

Wu, working in the same field of endeavor, teaches: generating training image pairs (See Wu, [Pg. 2, ln. 3–5, 2.2. Motion artifact simulation], For model training process, simulated motion artifacts were generated based on the ‘ground truth’ clean image (Fig. 1a) through image space and K-space image transformations); wherein the inputs and targets are in x-y-phase format (See Wu, [Pg. 2, ln. 3–5, 2.2. Motion artifact simulation], For model training process, simulated motion artifacts were generated based on the ‘ground truth’ clean image (Fig. 1a) through image space and K-space image transformations. Note: Examiner is interpreting simulating using the image domain as the x-y and the phase domain as creating an x-y phase format).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference to generate training image pairs wherein the inputs and targets are in x-y-phase format, based on the method of Wu’s reference. The suggestion/motivation would have been to produce high quality motion reduced images with little perceptual difference from the target image (See Wu, [Table 1]).

However, Schlemper and Wu fail(s) to teach the lower resolution versions of the simulated MR images are inputs. Huang, working in the same field of endeavor, teaches: the lower resolution versions of the simulated MR images are inputs (See Huang, ¶ [0038], The simulation is of a full sampling. To provide raw data representing less than a full sampling (e.g., MR fast imaging), an undersampled scan may be simulated, such as simulating based on an under-sampling mask defining a rate of undersampling and line order. Alternatively, the k-space data from the full sampling is processed, removing some of the raw data (e.g., removing k-space or sinogram data) to simulate undersampling scan. Under-sampling is performed retrospectively by keeping a subset of the full-sampled data).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference such that the lower resolution versions of the simulated MR images are inputs, based on the method of Huang’s reference. The suggestion/motivation would have been to more easily collect diverse samples (See Huang, ¶ [0008]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu and Huang with Schlemper to obtain the invention as specified in claim 18.
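The training-pair mapping applied in claims 13, 18, and 4 (each simulated image serves as its own ground-truth target, with a degraded version of the same image as the network input, giving 1-1 matched pairs) can be sketched as follows. Both function names here are hypothetical, and the toy `degrade` helper is only a crude stand-in; a real pipeline would substitute masked k-space undersampling per Huang or simulated motion corruption per Wu.

```python
import numpy as np

def make_training_pairs(simulated_images, degrade):
    """Build 1-1 matched (input, target) pairs: each simulated MR image is
    the ground-truth target, and a degraded version of the SAME image
    (e.g., undersampled or motion-corrupted) is the network input."""
    pairs = []
    for target in simulated_images:
        pairs.append((degrade(target), target))  # input and target share one source
    return pairs

def degrade(img):
    """Toy degradation: zero out high-frequency k-space rows, a crude proxy
    for the undersampling or motion simulation used by the references."""
    k = np.fft.fftshift(np.fft.fft2(img))
    k[: img.shape[0] // 4] = 0
    k[-(img.shape[0] // 4) :] = 0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

sims = [np.random.default_rng(i).random((16, 16)) for i in range(3)]
pairs = make_training_pairs(sims, degrade)
assert len(pairs) == len(sims)                        # one pair per simulated image
assert all(t is s for (_, t), s in zip(pairs, sims))  # targets are 1-1 matched
```

Because input and target in each pair derive from the same source image, the pairing is exact by construction, which is the sense in which the examiner reads the pairs as "1-1 matched."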
Regarding claim 19, Schlemper in view of Wu and further in view of Huang teaches the method of claim 18, wherein the model is a neural network model (See Schlemper, ¶ [0126], Returning to FIG. 2C, in some embodiments, neural network 238 may be configured to suppress artefacts in the image domain), and [the method further comprises training the neural network model on training data including the training image pairs].

However, Schlemper and Huang fail(s) to teach that the method further comprises training the neural network model on training data including the training image pairs. Wu, working in the same field of endeavor, teaches: the method further comprises training the neural network model on training data including the training image pairs (See Wu, [Pg. 4, ln. 4–5], First, the stage-I network was trained independently with image patch pairs of the clean and simulated motion images).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference such that the method further comprises training the neural network model on training data including the training image pairs, based on the method of Wu’s reference. The suggestion/motivation would have been to produce high quality motion reduced images with little perceptual difference from the target image (See Wu, [Table 1]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Wu with Schlemper and Huang to obtain the invention as specified in claim 19.

Regarding claim 20, Schlemper teaches the method of claim 19, further comprising deploying the neural network model to generate an artifact-reduced version of an inputted MR image (See Schlemper, ¶ [0126], Returning to FIG. 2C, in some embodiments, neural network 238 may be configured to suppress artefacts in the image domain).

Claim(s) 3 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Schlemper et al. (US 20200294287 A1, hereafter, "Schlemper") in view of Wu et al. (See NPL attached, "Image-based motion artifact reduction on liver dynamic contrast enhanced MRI", hereafter, "Wu") further in view of Huang et al. (US 20200065969 A1, hereafter, "Huang") and further in view of Braun et al. (US 10698063 B2, hereafter, "Braun").

Regarding claim 3, Schlemper teaches the method of claim 2, [wherein the input image is an undersampled version of the simulated image and the target image is a high-quality simulated image]. However, Schlemper and Wu fail(s) to teach wherein the input image is an undersampled version of the simulated image.

Huang, working in the same field of endeavor, teaches: wherein the input image is an undersampled version of the simulated image (See Huang, ¶ [0038], The simulation is of a full sampling. To provide raw data representing less than a full sampling (e.g., MR fast imaging), an undersampled scan may be simulated, such as simulating based on an under-sampling mask defining a rate of undersampling and line order. Alternatively, the k-space data from the full sampling is processed, removing some of the raw data (e.g., removing k-space or sinogram data) to simulate undersampling scan. Under-sampling is performed retrospectively by keeping a subset of the full-sampled data).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference such that the input image is an undersampled version of the simulated image, based on the method of Huang’s reference. The suggestion/motivation would have been to more easily collect diverse samples (See Huang, ¶ [0008]).

However, Schlemper, Wu and Huang fail(s) to teach the target image is a high-quality simulated image. Braun, working in the same field of endeavor, teaches: the target image is a high-quality simulated image (See Braun, [Col. 7, ln. 19–22 and 26–28], In one embodiment, the training data (e.g. first MR data 405 and second MR data 403) is acquired directly from MR scanners, solely from simulation, or from a combination of the two, ..., For simulation data, digital phantoms are used, and MR acquisitions are simulated with and without motion artifact sources. Note: the simulated image without the motion artifacts is being interpreted as high quality).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Schlemper’s reference such that the target image is a high-quality simulated image, based on the method of Braun’s reference. The suggestion/motivation would have been to produce high quality reconstructions based on high quality targets (See Braun, [Col. 1, ln. 20–45]). Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Braun and Huang with Schlemper and Wu to obtain the invention as specified in claim 3.

Regarding claim 4, Schlemper in view of Wu further in view of Huang and further in view of Braun teaches the method of claim 3, wherein the input image and the target image of a given training image pair are 1-1 matched (See Schlemper, ¶ [0129], In the above expression for loss, the generator G is the filtering network and the discriminator D is trained to best differentiate between images filtered with the network G and original noise-free images (ground truth). Note: Examiner is interpreting the original and ground truth as being 1-1 since they are the same image).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lee et al. (See NPL attached, “Deep Learning in MR Motion Correction”) reviews previous studies of motion correction methods based on deep learning, with particular attention to the methods used for network training, and proposes a new motion-simulation tool, called view2Dmotion, which can generate massive training datasets for motion simulation.

Zhou et al. (US 20190104940 A1) teaches a method and apparatus that uses a deep learning (DL) network to reduce noise and artifacts in reconstructed medical images, such as images generated using computed tomography, positron emission tomography, and magnetic resonance imaging. The DL network can operate either on pre-reconstruction data or on a reconstructed image. The DL network can be an artificial neural network or a convolutional neural network (e.g., using a three-channel volumetric kernel architecture). Different neural networks can be trained depending on the noise level, scanning protocol, or the anatomic, diagnostic or clinical objective of the reconstructed image (e.g., by partitioning the training data into noise-level ranges and training respective DL networks for each range). Further, the DL networks can be trained to mitigate artifacts, such as the cone-beam artifact.

Lebel et al. (US 10635943 B1) teaches methods and systems for reducing noise in medical images with deep neural networks. In one embodiment, a method for training a neural network comprises transforming each of a plurality of initial image data sets not acquired by a medical imaging modality into a target image data set, wherein each target image data set is in a format specific to the medical imaging modality, corrupting each target image data set to generate a corrupted image data set, and training the neural network to map each corrupted image data set to the corresponding target image data set. In this way, the high resolution of digital non-medical photographs or images can be leveraged for the enhancement or correction of medical images, and the trained neural network can be used to reduce noise and image artifacts in medical images acquired by the medical imaging modality.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DION J SATCHER, whose telephone number is (703) 756-5849. The examiner can normally be reached Monday - Thursday 5:30 am - 2:30 pm, Friday 5:30 am - 9:30 am PST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at (571) 272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DION J SATCHER/
Patent Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676

Prosecution Timeline

Oct 24, 2023
Application Filed
Jan 09, 2026
Non-Final Rejection — §101, §103
Mar 04, 2026
Applicant Interview (Telephonic)
Mar 04, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586218
MOTION ESTIMATION WITH ANATOMICAL INTEGRITY
2y 5m to grant Granted Mar 24, 2026
Patent 12579787
INSTRUMENT RECOGNITION METHOD BASED ON IMPROVED U2 NETWORK
2y 5m to grant Granted Mar 17, 2026
Patent 12573066
Depth Estimation Using a Single Near-Infrared Camera and Dot Illuminator
2y 5m to grant Granted Mar 10, 2026
Patent 12555263
SYSTEMS AND METHODS FOR TWO-STAGE OBJECTION DETECTION
2y 5m to grant Granted Feb 17, 2026
Patent 12548140
DETERMINING PROCESS DEVIATIONS THROUGH VIDEO ANALYSIS
2y 5m to grant Granted Feb 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
85%
Grant Probability
99%
With Interview (+14.2%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 39 resolved cases by this examiner. Grant probability derived from career allow rate.
