Prosecution Insights
Last updated: April 18, 2026
Application No. 18/316,666

DEEP LEARNING BASED MEDICAL IMAGING SYSTEM AND METHOD

Final Rejection — §101, §103
Filed: May 12, 2023
Examiner: CRUZ, IRIANA
Art Unit: 2681
Tech Center: 2600 — Communications
Assignee: GE Precision Healthcare LLC
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 91%

Examiner Intelligence

Grants 81% — above average

Career Allow Rate: 81% (590 granted / 726 resolved; +19.3% vs TC avg)
Interview Lift: +9.3% (moderate) among resolved cases with interview
Typical Timeline: 2y 8m average prosecution; 48 currently pending
Career History: 774 total applications across all art units

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 53.9% (+13.9% vs TC avg)
§102: 24.2% (-15.8% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 726 resolved cases

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 01/16/2026 have been fully considered but they are not persuasive.

Applicant argues against the prior §101 rejection, stating that the claims are directed to a specific improvement in medical imaging and not to a business method or mental process. The independent claims correlate directly to Example 47 of the July 2024 subject matter eligibility examples regarding training a model, using the model to process (reconstruct) data, and using the processed data to generate data (an image labeled medical). The claims are found to be ineligible. The claimed elements represent mere instructions to implement an abstract idea on a computer and insignificant extra-solution activity, which do not provide an inventive concept. This outcome is further supported by Ex parte Desjardins (PTAB ARP, September 26, 2025), which describes that training and using a machine learning model may be considered eligible if the claims specifically state or reflect the improvement of the invention described by the specification. Applicant states that the improvement is found in paragraph [0056]: superior image quality and faster scans for the patient, with the faster scan also resulting in reduced motion artifacts. However, the high-level, generic claim language of training a model, using the model to process (reconstruct) data, and using the processed data to generate data (an image labeled medical) does not reflect the paragraph [0056] improvement or any other improvement to the technology. It is suggested that the claims be further amended with language that is either particular to said improvement or clearly reflects said improvement. In view of the current state of the claims, however, the §101 rejection is upheld.
Applicant argues against the §103 rejection, stating that Gerdes’837's perturbation signal is applied after the model has already been trained and is therefore not used for training. This argument is a piecemeal analysis of the combination of references. Gerdes’837 discloses in paragraph [0068] the ability to use the perturbation signal as an input to the model. Paragraph [0045] describes the perturbation signal amplitude as being within a noise band of the test signal. Dey’378 specifically discloses, in paragraph [0102] and figure 1, that training of the network is performed with a training data set including a first image generated by the medical imaging device and a second image generated by combining the first image with noise. The combination utilizes Gerdes’837's perturbation signal amplitude of noise with Dey’378's second-image training data including noise. The combination avoids anomaly alerts that may otherwise compromise the system. The rejection is upheld.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-21 and 24 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1 and 12 recite a processing system that trains a deep learning (DL) network using input image training data including raw image data and at least one perturbation signal, uses the trained DL network to determine reconstructed image data from the image data of the subject, and generates a medical image of the subject based on the reconstructed image data.
The claimed steps to “train”, “determine” and “generate” encompass mathematical calculations (e.g., a loss function used to minimize the difference or error between the plurality of DL network outputs). A claim limitation that, under its broadest reasonable interpretation in light of the specification, encompasses a mathematical calculation falls within the “Mathematical Concepts” grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application because the claimed processing system and deep learning network training are recited at a high level of generality, such that they amount to no more than mere instructions to apply the exception using a generic computer component, and thus are insignificant extra-solution activity. See MPEP 2106.05(g). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(f). The claims do not include additional elements sufficient to amount to significantly more than the judicial exception: the limitations are claimed at a high level of generality, and the insignificant extra-solution activity identified above (training a deep learning network using input images to determine reconstructed image data from the image data of the subject and generating an image based on that data) is recognized by the courts as well-understood, routine, and conventional activity when claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. The claims are not patent eligible.

Claims 2 and 13 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 2 and 13 recite the same abstract idea as claim 1.
The claims recite the additional limitation of “wherein the at least one medical imaging device comprises a magnetic resonance imaging (MRI) system, an X-ray imaging system, a computed tomography (CT) imaging system, or an ultrasound imaging system”, which merely elaborates on the abstract idea by further specifying additional components recited at a high level of generality, and therefore does not amount to significantly more than the abstract idea.

Claims 3 and 14 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 3 and 14 recite the same abstract idea as claim 1. The claims recite the additional limitation of “wherein the raw image data is under-sampled image data”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claims 4 and 15 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 4 and 15 recite the same abstract idea as claim 1. The claims recite the additional limitation of “wherein the at least one perturbation signal is a small amplitude signal relative to the raw image data”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claims 5 and 16 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 5 and 16 recite the same abstract idea as claim 1.
The claims recite the additional limitation of “wherein the processing system is programmed to update parameters of the DL network based on a loss function, wherein the loss function uses a difference between the at least one perturbation signal and at least one restored perturbation signal as a loss metric”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claims 6 and 17 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 6 and 17 recite the same abstract idea as claim 1. The claims recite the additional limitation of “wherein the loss function is a sum squared error (L2-norms) function or a structural similarity (SSIM) function”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claims 7 and 18 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 7 and 18 recite the same abstract idea as claim 1. The claims recite the additional limitation of “wherein the at least one restored perturbation signal is determined based on a difference between data of a plurality of reconstructed images”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claims 8 and 19 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 8 and 19 recite the same abstract idea as claim 1.
The claims recite the additional limitation of “a first reconstructed image of the plurality of reconstructed images is determined by executing the DL network, wherein an input signal to the DL network includes the raw image data”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claims 9 and 20 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 9 and 20 recite the same abstract idea as claim 1. The claims recite the additional limitation of “wherein a second reconstructed image of the plurality of reconstructed images is determined by executing the DL network with a combination of the raw image data, noise data and the at least one perturbation signal as an input signal”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claim 10 is dependent on claim 1 and includes all the limitations of claim 1; therefore, claim 10 recites the same abstract idea as claim 1. The claim recites the additional limitation of “wherein the plurality of reconstructed images is generated by executing the DL network a plurality of times with the raw image data, noise data, a plurality of perturbation signals or combinations thereof”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claims 11 and 21 are dependent on claims 1 and 12 and include all the limitations of claims 1 and 12; therefore, claims 11 and 21 recite the same abstract idea as claim 1.
The claims recite the additional limitation of “wherein a second reconstructed image of the plurality of reconstructed images is determined by executing the DL network with a combination of the raw image data, noise data and the at least one perturbation signal as an input signal”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claim 24 is dependent on claim 1 and includes all the limitations of claim 1; therefore, claim 24 recites the same abstract idea as claim 1. The claim recites the additional limitation of “medical imaging system of claim 1, wherein the at least one perturbation signal is not a noise signal”, which merely elaborates on the abstract idea by further specifying an additional mathematical calculation, and therefore does not amount to significantly more than the abstract idea.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-21 and 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Dey et al. (US 2022/0107378 A1) in view of Gerdes et al. (US 2023/0376837 A1).
With respect to Claim 1, Dey’378 shows a medical imaging system comprising: at least one medical imaging device providing image data of a subject (paragraph [0098], magnetic resonance imaging (MRI) systems); a processing system programmed to: train a deep learning (DL) network using input image training data, wherein the input image training data includes raw image data and at least one [perturbation] signal (paragraph [0102], training a neural network using a pair of images including a first, raw image generated using data collected by a medical imaging device and a second image generated by combining the first image with a noise image (e.g., a noise-corrupted image); paragraph [0122] shows the neural network being a deep neural network); use the trained DL network to determine reconstructed image data from the image data of the subject (paragraph [0140], image reconstruction module 210 including modules comprised of trained neural networks); and generate a medical image of the subject based on the reconstructed image data (paragraph [0100] shows producing a sharper medical image; figure 5, step 508).

Dey’378 does not specifically show training a deep learning (DL) network using input image training data wherein the input image training data includes at least one perturbation signal. Gerdes’837 shows training a deep learning (DL) network using input image training data, wherein the input image training data includes at least one perturbation signal (paragraph [0068], placing a perturbation (such as oscillating perturbation d 145) on a ML model; the sinusoidal perturbation is artificially generated to have known, pre-determined characteristics of amplitude, period, and waveform; the perturbation is small in amplitude relative to the amplitude of the test signal input to which the perturbation is applied).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dey’378 to include training a deep learning (DL) network using input image training data wherein the input image training data includes at least one perturbation signal, as taught by Gerdes’837. The suggestion/motivation for doing so would have been to improve the system's ability to avoid anomaly alerts that may compromise metrics for following or spillover.

With respect to Claim 2, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 1, wherein the at least one medical imaging device comprises a magnetic resonance imaging (MRI) system, an X-ray imaging system, a computed tomography (CT) imaging system, or an ultrasound imaging system (in Dey’378: paragraph [0098], magnetic resonance imaging (MRI) systems).

With respect to Claim 3, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 1, wherein the raw image data is under-sampled image data (in Dey’378: paragraph [0099], raw image data includes pixel-dependent noise).

With respect to Claim 4, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 1, wherein the at least one perturbation signal is a small amplitude signal relative to the raw image data (in Gerdes’837: paragraph [0068], the perturbation is small in amplitude relative to the amplitude of the test signal input to which the perturbation is applied).
With respect to Claim 5, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 1, wherein the processing system is programmed to update parameters of the DL network based on a loss function, wherein the loss function uses a difference between the at least one perturbation signal and at least one restored perturbation signal as a loss metric (in Dey’378: paragraph [0127], neural network 110 may be trained in accordance with a loss function 114; the loss function 114 may be determined based on the denoised image 112 and the medical image of the subject 102, and may then be used to train the neural network 110 (e.g., to update the weights of the neural network 110)).

With respect to Claim 6, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 5, wherein the loss function is a sum squared error (L2-norms) function or a structural similarity (SSIM) function (in Dey’378: paragraph [0162]).

With respect to Claim 7, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 5, wherein the at least one restored perturbation signal is determined based on a difference between data of a plurality of reconstructed images (in Dey’378: paragraphs [0127], [0140] and [0219], loss function 114 may be a mean-squared error (MSE) loss function calculated by taking the mean of squared differences between the denoised image 112 and the medical image of the subject 102).

With respect to Claim 8, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 7, wherein a first reconstructed image of the plurality of reconstructed images is determined by executing the DL network, wherein an input signal to the DL network includes the raw image data (in Dey’378: paragraph [0218], MR image reconstruction generated by neural network model 2610; paragraph [0160], implemented by a deep learning model).
With respect to Claim 9, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 7, wherein a second reconstructed image of the plurality of reconstructed images is determined by executing the DL network with a combination of the raw image data, noise data and the at least one perturbation signal as an input signal (in Dey’378: paragraph [0218], applying the convolutional neural network block 2650 to x.sub.i to obtain a second result, and subtracting from x.sub.i a linear combination of the first result and the second result, where the linear combination is calculated using the block-specific weight λ.sub.i).

With respect to Claim 10, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 7, wherein the plurality of reconstructed images is generated by executing the DL network a plurality of times with the raw image data, noise data, a plurality of perturbation signals or combinations thereof (in Dey’378: paragraph [0218], applying the convolutional neural network block 2650 to x.sub.i to obtain a second result, and subtracting from x.sub.i a linear combination of the first result and the second result, where the linear combination is calculated using the block-specific weight λ.sub.i).

With respect to Claim 11, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 1, wherein the reconstructed image data includes a reconstructed image or k-space data thereof (in Dey’378: the reconstruction procedure is configured to generate MR images in the image domain using MR data collected in the spatial frequency domain (e.g., in k-space); in some embodiments, the image reconstruction module 1810 may be configured to generate the noisy medical image 1820 using a reconstruction procedure including compressed sensing).
With respect to Claim 12, Dey’378 shows a method for imaging a subject (paragraph [0098], magnetic resonance imaging (MRI) systems) comprising: training a deep learning (DL) network using input image training data, wherein the input image training data includes raw image data and at least one [perturbation] signal (paragraph [0102], training a neural network using a pair of images including a first, raw image generated using data collected by a medical imaging device and a second image generated by combining the first image with a noise image (e.g., a noise-corrupted image); paragraph [0122] shows the neural network being a deep neural network); acquiring image data of the subject with a medical imaging device (paragraph [0005], obtaining a noisy MR image of a subject); providing the image data of the subject as an input to the trained DL network (paragraph [0140], image reconstruction module 210 including modules comprised of trained neural networks); and using the DL network to generate a medical image of the subject based on the acquired image data (paragraph [0100] shows producing a sharper medical image; figure 5, step 508).

Dey’378 does not specifically show training a deep learning (DL) network using input image training data wherein the input image training data includes at least one perturbation signal. Gerdes’837 shows training a deep learning (DL) network using input image training data, wherein the input image training data includes at least one perturbation signal (paragraph [0068], placing a perturbation (such as oscillating perturbation d 145) on a ML model; the sinusoidal perturbation is artificially generated to have known, pre-determined characteristics of amplitude, period, and waveform; the perturbation is small in amplitude relative to the amplitude of the test signal input to which the perturbation is applied).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dey’378 to include training a deep learning (DL) network using input image training data wherein the input image training data includes at least one perturbation signal, as taught by Gerdes’837. The suggestion/motivation for doing so would have been to improve the system's ability to avoid anomaly alerts that may compromise metrics for following or spillover.

With respect to Claim 13, rejections analogous to those presented for claim 2 are applicable.
With respect to Claim 14, rejections analogous to those presented for claim 3 are applicable.
With respect to Claim 15, rejections analogous to those presented for claim 4 are applicable.
With respect to Claim 16, rejections analogous to those presented for claim 5 are applicable.
With respect to Claim 17, rejections analogous to those presented for claim 6 are applicable.
With respect to Claim 18, rejections analogous to those presented for claim 7 are applicable.
With respect to Claim 19, rejections analogous to those presented for claim 8 are applicable.
With respect to Claim 20, rejections analogous to those presented for claim 9 are applicable.
With respect to Claim 21, rejections analogous to those presented for claim 11 are applicable.
With respect to Claim 23, Dey’378 shows a magnetic resonance (MR) system (paragraph [0098], magnetic resonance imaging (MRI) systems) comprising: a magnet configured to generate a main magnetic field about at least a portion of a subject arranged in the MRI system (paragraph [0005], a magnetics system having a plurality of magnetics components to produce magnetic fields for performing MRI; paragraph [0176], magnet 1422); a gradient coil assembly configured to apply at least one gradient field to the main magnetic field (paragraph [0176], RF transmit and receive coils 1426 and gradient coils 1428; paragraph [0177], gradient coils 1428 may be arranged to provide gradient fields and, for example, may be arranged to generate gradients in the magnetic field in three substantially orthogonal directions (X, Y, Z) to localize where MR signals are induced; in some embodiments, one or more magnetics components 1420 (e.g., shims 1424 and/or gradient coils 1428) may be fabricated using the laminate techniques); a radio frequency (RF) system configured to apply an RF field to the subject and to receive magnetic resonance signals from the subject (paragraphs [0176]-[0178], coils that may be used to generate RF pulses to induce a magnetic field B1; the transmit/receive coil(s) may be configured to generate any suitable type of RF pulses configured to excite an MR response in a subject and detect the resulting MR signals emitted; RF transmit and receive coils 1426 may include one or multiple transmit coils and one or multiple receive coils); a processing system programmed to: generate image data of the subject from the MR signals (paragraph [0185], computing device 1404 may process received MR data to generate one or more MR images using any suitable image reconstruction process(es)); use a trained DL network to determine reconstructed image data from the image data of the subject (paragraphs [0077] and [0115], training a machine learning model for use as part of the denoising module, and MR image reconstruction and denoising pipeline 1800 including an image reconstruction module and a denoising module; paragraph [0122], the neural network being a deep neural network); and generate a medical image of the subject based on the reconstructed image data (paragraph [0097], generates a medical image using the acquired data); wherein the DL network is trained using input image training data, wherein the input image training data includes raw image data and at least one [perturbation] signal (paragraph [0102], training a neural network using a pair of images including a first, raw image generated using data collected by a medical imaging device and a second image generated by combining the first image with a noise image (e.g., a noise-corrupted image); paragraph [0122] shows the neural network being a deep neural network).

Dey’378 does not show wherein the input image training data includes at least one perturbation signal. Gerdes’837 shows wherein the input image training data includes at least one perturbation signal (paragraph [0068], placing a perturbation (such as oscillating perturbation d 145) on a ML model; the sinusoidal perturbation is artificially generated to have known, pre-determined characteristics of amplitude, period, and waveform; the perturbation is small in amplitude relative to the amplitude of the test signal input to which the perturbation is applied).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dey’378 to include input image training data that includes at least one perturbation signal, as taught by Gerdes’837. The suggestion/motivation for doing so would have been to improve the system's ability to avoid anomaly alerts that may compromise metrics for following or spillover.

With respect to Claim 24, the combination of Dey’378 and Gerdes’837 shows the medical imaging system of claim 1, wherein the at least one perturbation signal is not a noise signal (in Gerdes’837: paragraph [0068], a perturbation (such as oscillating perturbation d 145) is placed on the testing portion of the data. In one embodiment, the sinusoidal perturbation is artificially generated to have known, pre-determined characteristics of amplitude, period, and waveform. In one embodiment, the perturbation is small in amplitude relative to the amplitude of the test signal input to which the perturbation is applied. In one embodiment, the perturbation is small in period so as to be able to repeat within, and not go outside of, the training range of the input signal. Thus, in one embodiment, the perturbation (such as oscillating perturbation d 145) used as a probe signal is small relative to the signal input to which it is applied. In one embodiment, the perturbation is periodic or repeating in nature so as to be clearly identifiable in the frequency domain. In one embodiment, the perturbation is sinusoidal, such as a mono-frequency pure sine wave).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IRIANA CRUZ, whose telephone number is (571) 270-3246. The examiner can normally be reached 10-6. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi M. Sarpong, can be reached at (571) 270-3438. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IRIANA CRUZ/
Primary Examiner, Art Unit 2681
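The training mechanism the rejection characterizes in claims 5-10 (two reconstructions, a restored perturbation taken as their difference, and an L2 loss against the injected perturbation) can be sketched in a few lines. This is an illustrative reading of the claim language only, not the applicant's actual implementation: the moving-average "network", the signal shapes, and the sizes are all hypothetical stand-ins.

```python
import numpy as np

def dl_network(x):
    # Hypothetical stand-in for the trained DL reconstruction network:
    # a simple moving-average filter, purely for illustration.
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

rng = np.random.default_rng(0)
raw = rng.normal(size=64)                                    # raw (e.g., under-sampled) image data
noise = 0.05 * rng.normal(size=64)                           # noise data
perturbation = 0.01 * np.sin(np.linspace(0, 8 * np.pi, 64))  # small, known perturbation signal

recon_first = dl_network(raw)                          # claim 8: input includes the raw image data
recon_second = dl_network(raw + noise + perturbation)  # claim 9: raw data + noise + perturbation

restored = recon_second - recon_first                  # claim 7: difference between reconstructions
loss = np.sum((perturbation - restored) ** 2)          # claims 5-6: sum-squared-error (L2) loss metric
print(f"L2 loss between injected and restored perturbation: {loss:.6f}")
```

In an actual training loop, this loss would be backpropagated to update the network parameters (claim 5); SSIM is the alternative loss metric the claims name.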

Prosecution Timeline

May 12, 2023
Application Filed
Oct 14, 2025
Non-Final Rejection — §101, §103
Jan 16, 2026
Response Filed
Apr 03, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597279
SYSTEMS AND METHODS FOR DYNAMICALLY PROVIDING NOTARY SESSIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12558047
MEDICAL IMAGE PROCESSING APPARATUS, X-RAY DIAGNOSIS APPARATUS, AND NON-VOLATILE COMPUTER-READABLE STORAGE MEDIUM STORING THEREIN MEDICAL IMAGE PROCESSING PROGRAM
2y 5m to grant Granted Feb 24, 2026
Patent 12551156
FIBROSIS MEASUREMENT DEVICE, FIBROSIS MEASUREMENT METHOD AND PROPERTY MEASUREMENT DEVICE
2y 5m to grant Granted Feb 17, 2026
Patent 12544188
METHOD FOR CONSTRUCTING AND DISPLAYING 3D COMPUTER MODELS OF THE TEMPOROMANDIBULAR JOINTS
2y 5m to grant Granted Feb 10, 2026
Patent 12541331
IMAGE FORMING SYSTEM FOR INSPECTING AN IMAGE FORMED ON A SHEET BASED ON AN IMAGE EDITING PROCESS INTENSITY
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 81%
With Interview: 91% (+9.3%)
Median Time to Grant: 2y 8m
PTA Risk: Moderate
Based on 726 resolved cases by this examiner. Grant probability derived from career allow rate.
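The projection figures above are simple arithmetic on the examiner's career record. A quick check of the rounding (the additive interview lift is this dashboard's stated model, not a USPTO statistic):

```python
granted, resolved = 590, 726

allow_rate = granted / resolved                    # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")      # 81.3%, displayed as 81%

interview_lift = 0.093                             # +9.3 percentage points with an interview
with_interview = allow_rate + interview_lift
print(f"With interview:    {with_interview:.0%}")  # 91%
```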
