Prosecution Insights
Last updated: April 19, 2026
Application No. 18/488,012

CALIBRATION METHODS AND SYSTEMS FOR IMAGING FIELD

Non-Final OA (§101, §102)
Filed: Oct 16, 2023
Examiner: LAU, TUNG S
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Shanghai United Imaging Healthcare Co., Ltd.
OA Round: 1 (Non-Final)

Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 83%, above average (921 granted / 1112 resolved; +14.8% vs TC avg)
Interview Lift: +14.0%, moderate (allow rate among resolved cases with vs. without an interview)
Typical Timeline: 3y 0m avg prosecution; 38 applications currently pending
Career History: 1150 total applications across all art units
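
The headline figures are simple arithmetic over the career record. A minimal Python sketch of how they appear to be derived, assuming the dashboard computes grant probability as the raw career allow rate and simply adds the interview lift:

```python
# Reproduce the dashboard arithmetic from the career record shown above.
# Assumption: grant probability == career allow rate, and the "with
# interview" figure is that rate plus the observed interview lift.

granted, resolved = 921, 1112
allow_rate = granted / resolved            # 0.828... -> displayed as 83%
interview_lift = 0.14                      # +14.0% observed with interviews

print(f"Career allow rate: {allow_rate:.1%}")                 # ~82.8%
print(f"Grant probability: {round(allow_rate * 100)}%")       # 83%
print(f"With interview:    {round(allow_rate * 100) + round(interview_lift * 100)}%")  # 97%
```

The raw ratio is 82.8%, so the dashboard appears to round to 83% before adding the +14.0% lift to reach the 97% shown.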

Statute-Specific Performance

§101: 20.9% (-19.1% vs TC avg)
§103: 23.1% (-16.9% vs TC avg)
§102: 27.9% (-12.1% vs TC avg)
§112: 14.3% (-25.7% vs TC avg)

Tech Center averages are estimates. Based on career data from 1112 resolved cases.
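
Each row reports the examiner's rate alongside a delta against the Tech Center average, so the implied TC baselines can be recovered by subtraction; a quick sketch:

```python
# Recover the implied Tech Center baseline for each statute:
# baseline = examiner's rate minus the reported delta.
rows = {"§101": (20.9, -19.1), "§103": (23.1, -16.9),
        "§102": (27.9, -12.1), "§112": (14.3, -25.7)}

for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta                   # e.g. 20.9 - (-19.1) = 40.0
    print(f"{statute}: examiner {rate}% vs TC avg ~{tc_avg:.1f}%")
```

Every row recovers the same ~40.0% baseline, consistent with the footnote describing the Tech Center average as a single estimate.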

Office Action

§101, §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

DETAILED ACTION

Preliminary Amendment

The Preliminary Amendment filed on 10/16/2023 is noted by the examiner. Claims 1-18, 21, and 32 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-18, 21, and 32 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Claim 1. Step 1: the claim is a process (or machine) (Yes). Step 2A, Prong One: does the claim recite an abstract idea? The claim relates to a calibration method for an imaging field, comprising: obtaining a calibration model of a target imaging device, wherein the calibration model includes at least one convolutional layer, the at least one convolutional layer including at least one candidate convolution kernel. This appears to be an abstract idea in the form of a mental process (MPEP 2106.04(a)) or data gathering equivalent to a mathematical concept or mathematical manipulation function (MPEP 2106.04(a)(2); the concept need not be expressed in mathematical symbols, because "[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula"). Step 2A, Prong One: Yes. Step 2A, Prong Two: is the claim directed to the abstract idea? In other words, does the claim recite additional elements that integrate the judicial exception into a practical application? The additional element of determining a target convolution kernel based on the at least one candidate convolution kernel of the calibration model is recited at a high level of generality and merely amounts to a particular field of use (see MPEP 2106.05(h)) and/or insignificant post-solution activity (see MPEP 2106.05(g)); this does not integrate the judicial exception into a practical application. Step 2A, Prong Two: No. Step 2B: does the claim recite additional elements that amount to significantly more than the judicial exception? The additional element of determining calibration information of the target imaging device based on the target convolution kernel, wherein the calibration information is used to calibrate at least one of a device parameter of the target imaging device or imaging data acquired by the target imaging device, appears to be a field of use (see MPEP 2106.05(h) and MPEP 2106.05(f)) and/or merely amounts to insignificant extra-solution output of results (see MPEP 2106.05(g)), and therefore fails to integrate the abstract idea into a practical application or amount to significantly more. Step 2B: No. Claim 1 is not eligible.
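
The paragraph above walks the standard MPEP 2106 eligibility flowchart (Step 1, Step 2A Prongs One and Two, Step 2B). A schematic Python sketch of that decision flow; the boolean answers are the examiner's stated findings for claim 1, not outputs of any computable test:

```python
# Schematic of the MPEP 2106 subject-matter-eligibility flow. The answers
# passed in below encode the examiner's findings, not a real test.

def eligible(statutory_category: bool,
             recites_judicial_exception: bool,
             integrates_practical_application: bool,
             significantly_more: bool) -> bool:
    if not statutory_category:               # Step 1: process, machine, etc.?
        return False
    if not recites_judicial_exception:       # Step 2A, Prong One
        return True                          # no judicial exception recited
    if integrates_practical_application:     # Step 2A, Prong Two
        return True                          # exception practically applied
    return significantly_more                # Step 2B: inventive concept?

# Claim 1 per this Office action: Yes, Yes, No, No -> not eligible.
print(eligible(True, True, False, False))    # False
```
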
Claim 17. Step 1: the claim is a process (or machine) (Yes). Step 2A, Prong One: does the claim recite an abstract idea? The claim relates to a calibration system for an imaging field configured to: obtain a calibration model of a target imaging device, wherein the calibration model includes at least one convolutional layer, the at least one convolutional layer including at least one candidate convolution kernel. This appears to be an abstract idea in the form of a mental process (MPEP 2106.04(a)) or data gathering equivalent to a mathematical concept or mathematical manipulation function (MPEP 2106.04(a)(2); the concept need not be expressed in mathematical symbols, because "[w]ords used in a claim operating on data to solve a problem can serve the same purpose as a formula"). Step 2A, Prong One: Yes. Step 2A, Prong Two: is the claim directed to the abstract idea? In other words, does the claim recite additional elements that integrate the judicial exception into a practical application? The additional element of determining a target convolution kernel based on the at least one candidate convolution kernel of the calibration model is recited at a high level of generality and merely amounts to a particular field of use (see MPEP 2106.05(h)) and/or insignificant post-solution activity (see MPEP 2106.05(g)); this does not integrate the judicial exception into a practical application. Step 2A, Prong Two: No. Step 2B: does the claim recite additional elements that amount to significantly more than the judicial exception? The additional elements, namely determining calibration information of the target imaging device based on the target convolution kernel (wherein the calibration information is used to calibrate at least one of a device parameter of the target imaging device and imaging data acquired by the target imaging device), at least one storage medium storing a set of instructions, and at least one processor in communication with the at least one storage medium that, when executing the stored set of instructions, causes the system to perform these operations, appear to be a field of use (see MPEP 2106.05(h) and MPEP 2106.05(f)) and/or merely amount to insignificant extra-solution output of results (see MPEP 2106.05(g)), and therefore fail to integrate the abstract idea into a practical application or amount to significantly more. Step 2B: No. Claim 17 is not eligible.

Claim 32. Step 1: the claim is a process (or machine) (Yes). Step 2A, Prong One: does the claim recite an abstract idea? The claim relates to a non-transitory computer readable medium including executable instructions that, when executed by at least one processor, cause the at least one processor to effectuate a method comprising: obtaining a calibration model of a target imaging device, wherein the calibration model includes at least one convolutional layer, the at least one convolutional layer including at least one candidate convolution kernel. This appears to be an abstract idea in the form of a mental process (MPEP 2106.04(a)) or data gathering equivalent to a mathematical concept or mathematical manipulation function (MPEP 2106.04(a)(2)). Step 2A, Prong One: Yes. Step 2A, Prong Two: does the claim recite additional elements that integrate the judicial exception into a practical application?
The additional element of determining a target convolution kernel based on the at least one candidate convolution kernel of the calibration model is recited at a high level of generality and merely amounts to a particular field of use (see MPEP 2106.05(h)) and/or insignificant post-solution activity (see MPEP 2106.05(g)); this does not integrate the judicial exception into a practical application. Step 2A, Prong Two: No. Step 2B: does the claim recite additional elements that amount to significantly more than the judicial exception? The additional element of determining calibration information of the target imaging device based on the target convolution kernel, wherein the calibration information is used to calibrate at least one of a device parameter of the target imaging device and imaging data acquired by the target imaging device, appears to be a field of use (see MPEP 2106.05(h) and MPEP 2106.05(f)) and/or merely amounts to insignificant extra-solution output of results (see MPEP 2106.05(g)), and therefore fails to integrate the abstract idea into a practical application or amount to significantly more. Step 2B: No. Claim 32 is not eligible.

Claim 2, which relates to wherein the calibration information includes at least one of mechanical deviation information of the target imaging device, crosstalk information of the target imaging device, or scattering information of the target imaging device, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 2 is not eligible.

Claim 3, which relates to wherein the determining the target convolution kernel based on the at least one candidate convolution kernel of the calibration model includes: determining the target convolution kernel by convolving the at least one candidate convolution kernel, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 3 is not eligible.

Claim 4, which relates to wherein the determining a target convolution kernel based on the at least one candidate convolution kernel of the calibration model includes: determining an input matrix based on the size of the at least one candidate convolution kernel, and determining the target convolution kernel by inputting the input matrix into the calibration model, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 4 is not eligible.

Claim 5, which relates to obtaining first projection data of a reference object, wherein the first projection data is acquired by the target imaging device and includes deviation projection data; obtaining second projection data of the reference object, wherein the second projection data excludes the deviation projection data; determining training data based on the first projection data and the second projection data; and generating the calibration model by training a preliminary model using the training data, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 5 is not eligible.
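
Claims 3 and 4, as characterized above, recite two routes to a single target kernel: convolving the candidate kernels together, or probing the model with an input matrix sized from the kernels. A minimal NumPy/SciPy sketch of both routes; the 3x3 kernels are hypothetical, and the impulse-probe equivalence assumes the layers are purely linear (no bias or activation), which the claims do not state:

```python
import numpy as np
from scipy.signal import convolve2d  # assumes SciPy is available

# Hypothetical 3x3 candidate kernels standing in for the model's conv layers.
k1 = np.array([[0., 1., 0.], [1., 2., 1.], [0., 1., 0.]]) / 6
k2 = np.array([[1., 0., 1.], [0., 2., 0.], [1., 0., 1.]]) / 6

# Claim 3 route: determine the target kernel by convolving the candidates.
target_a = convolve2d(k1, k2, mode="full")            # 5x5 effective kernel

# Claim 4 route: size an input matrix from the candidate kernels, run it
# through the model, and read the target kernel off the impulse response.
# Valid here only because this toy "model" is linear (no bias/activation).
n = k1.shape[0] + k2.shape[0] - 1                     # effective size: 5
probe = np.zeros((2 * n - 1, 2 * n - 1))
probe[n - 1, n - 1] = 1.0                             # centered unit impulse
resp = convolve2d(convolve2d(probe, k1, mode="same"), k2, mode="same")
target_b = resp[n - 1 - n // 2 : n + n // 2, n - 1 - n // 2 : n + n // 2]

assert np.allclose(target_a, target_b)                # both routes agree
```
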
Claim 6, which relates to wherein the generating the calibration model by training a preliminary model using the training samples includes one or more iterations, at least one of the one or more iterations comprising: determining an intermediate convolution kernel of an updated preliminary model generated in a previous iteration; determining a value of a loss function based on the first projection data, the second projection data, and the intermediate convolution kernel; and further updating the updated preliminary model to be used in a next iteration based on the value of the loss function, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 6 is not eligible.

Claim 7, which relates to wherein the determining a value of a loss function based on the first projection data, the second projection data, and the intermediate convolution kernel includes: determining the value of the loss function based on at least one of a value of a first loss function and a value of a second loss function, wherein the value of the first loss function is determined based on the intermediate convolution kernel and the value of the second loss function is determined based on the first projection data and the second projection data, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 7 is not eligible.

Claim 8, which relates to the target imaging device including a detector, the detector including a plurality of detection units, and the calibration information including a positional deviation of a target detection unit among the plurality of detection units, wherein the determining calibration information of the target imaging device based on the target convolution kernel includes: determining at least one first difference between a central element of the target convolution kernel and at least one other element of the target convolution kernel; determining at least one second difference between a projection position of the target detection unit and at least one projection position of at least one other detection unit of the detector; and determining the positional deviation of the target detection unit based on the at least one first difference and the at least one second difference, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 8 is not eligible.

Claim 9, which relates to wherein the target imaging device includes a radiation source and the calibration information includes mechanical deviation information of the radiation source, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 9 is not eligible.

Claim 10, which relates to the target imaging device including a detector, the detector including a plurality of detection units, and the calibration information including a crosstalk coefficient of a target detection unit among the plurality of detection units, wherein the determining calibration information of the target imaging device based on the target convolution kernel includes: determining, based on at least one difference between a central element of the target convolution kernel and at least one other element of the target convolution kernel, at least one crosstalk coefficient of the at least one other element with respect to the target detection unit, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 10 is not eligible.
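
Claims 6 and 7 describe an iterative training loop whose loss combines a term computed from the intermediate convolution kernel with a term computed from the first and second projection data. A sketch of that loss structure under illustrative assumptions (an L2 kernel penalty, an MSE data term, and a weighting lam, none of which the claims specify):

```python
import numpy as np

def combined_loss(intermediate_kernel, predicted, reference, lam=0.1):
    """Loss structure recited in claim 7; the concrete choices are illustrative."""
    # First loss: determined from the intermediate convolution kernel
    # (hypothetical choice: an L2 penalty on the kernel weights).
    l_kernel = float(np.sum(intermediate_kernel ** 2))
    # Second loss: determined from the first and second projection data
    # (hypothetical choice: MSE between the model output on the deviation-
    # containing first projection data and the deviation-free second data).
    l_data = float(np.mean((predicted - reference) ** 2))
    # Claim 6: each iteration updates the preliminary model from this value.
    return l_data + lam * l_kernel

rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 3))         # intermediate kernel (toy)
predicted = rng.normal(size=(16, 16))    # model output on first projection data
reference = predicted + 0.01 * rng.normal(size=(16, 16))  # second projection data
print(combined_loss(kernel, predicted, reference))
```
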
Claim 11, which relates to wherein the at least one other element includes at least two other elements in a same target direction, and the determining calibration information of the target imaging device based on the target convolution kernel further comprises: determining a first crosstalk coefficient of the target detection unit in the target direction based on a sum of the crosstalk coefficients of the at least two other elements with respect to the target detection unit, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 11 is not eligible.

Claim 12, which relates to wherein the determining calibration information of the target imaging device based on the target convolution kernel further includes: determining a second crosstalk coefficient of the target detection unit in the target direction based on a difference between the crosstalk coefficients of the at least two other elements with respect to the target detection unit, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 12 is not eligible.

Claim 13, which relates to the calibration information including scattering information of the target imaging device, wherein the determining calibration information of the target imaging device based on the target convolution kernel includes: determining scattering information of the target imaging device corresponding to at least one angle of view based on the target convolution kernel, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 13 is not eligible.

Claim 14, which relates to the calibration model also including a first activation function and a second activation function, wherein the first activation function is used to transform input data of the calibration model from projection data to data of a target type, the data of the target type being input to the at least one convolutional layer for processing, and the second activation function is used to transform output data of the at least one convolutional layer from the data of the target type to projection data, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 14 is not eligible.

Claim 15, which relates to wherein the calibration model also includes a fusion unit, the fusion unit being configured to fuse the input data and the output data of the at least one convolutional layer, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 15 is not eligible.

Claim 16, which relates to the calibration information of the target imaging device including calibration information relating to defocusing of the target imaging device, wherein the calibration model also includes a data transformation unit configured to transform the data of the first target type to determine transformed data, the transformed data being input to the at least one convolutional layer for processing, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 16 is not eligible.
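
Claims 8 and 10-12 read calibration quantities directly off individual elements of the target kernel: differences from the central element yield per-element crosstalk coefficients, and the sum or difference of coefficients along one direction yields the first and second direction-wise coefficients. A toy sketch of that readout; the kernel values and the exact mapping from element differences to coefficients are illustrative assumptions:

```python
import numpy as np

# Hypothetical 3x3 target kernel; in the application it would come from the
# trained calibration model.
k = np.array([[0.00, 0.02, 0.00],
              [0.03, 0.90, 0.01],
              [0.00, 0.04, 0.00]])
center = k[1, 1]

# Claims 8/10: differences between the central element and other elements,
# read as crosstalk coefficients of those elements (illustrative mapping).
coeff_left = center - k[1, 0]
coeff_right = center - k[1, 2]

# Claims 11/12: for two elements in the same target direction (here,
# horizontal), a first coefficient from their sum and a second coefficient
# from their difference.
first_coeff = coeff_left + coeff_right
second_coeff = coeff_left - coeff_right
print(first_coeff, second_coeff)   # ~1.76, ~-0.02
```
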
Claim 18, which relates to wherein the calibration information includes at least one of mechanical deviation information of the target imaging device, crosstalk information of the target imaging device, or scattering information of the target imaging device, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 18 is not eligible.

Claim 21, which relates to obtaining first projection data of a reference object, wherein the first projection data is acquired by the target imaging device and includes deviation projection data; obtaining second projection data of the reference object, wherein the second projection data excludes the deviation projection data; determining training data based on the first projection data and the second projection data; and generating the calibration model by training a preliminary model using the training data, appears to recite further data characterization and mathematical concepts that are part of the abstract idea; claim 21 is not eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-18, 21, and 32 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sun, CN 109284669 A (published 2019-01-29; G06N 3/045).

Regarding claim 1: Sun describes a calibration method for an imaging field (page 2, universal target detection; page 2, determine the image of the pedestrian), comprising: obtaining a calibration model of a target imaging device, wherein the calibration model includes at least one convolutional layer (page 6, convolutional layer), the at least one convolutional layer including at least one candidate convolution kernel (page 6, convolution kernel); determining a target convolution kernel based on the at least one candidate convolution kernel of the calibration model (page 6, convolution kernel detection model); and determining calibration information of the target imaging device based on the target convolution kernel, wherein the calibration information is used to calibrate at least one of a device parameter of the target imaging device or imaging data acquired by the target imaging device (page 2, determine the image of the pedestrian; page 11, adjusting resolution of the input image).
Regarding claim 17: Sun describes a calibration system for an imaging field (page 2, universal target detection; page 2, determine the image of the pedestrian) comprising: at least one storage medium storing a set of instructions (page 6, computer); and at least one processor in communication with the at least one storage medium that, when executing the stored set of instructions, causes the system to (page 6, computer): obtain a calibration model of a target imaging device, wherein the calibration model includes at least one convolutional layer (page 6, convolutional layer), the at least one convolutional layer including at least one candidate convolution kernel (page 6, convolution kernel); determine a target convolution kernel based on the at least one candidate convolution kernel of the calibration model (page 6, convolution kernel detection model); and determine calibration information of the target imaging device based on the target convolution kernel, wherein the calibration information is used to calibrate at least one of a device parameter of the target imaging device and imaging data acquired by the target imaging device (page 2, determine the image of the pedestrian; page 11, adjusting resolution of the input image).

Regarding claim 32: Sun describes a non-transitory computer readable medium including executable instructions that, when executed by at least one processor, cause the at least one processor to effectuate a method comprising (page 6, computer): obtaining a calibration model of a target imaging device, wherein the calibration model includes at least one convolutional layer (page 6, convolutional layer), the at least one convolutional layer including at least one candidate convolution kernel (page 6, convolution kernel); determining a target convolution kernel based on the at least one candidate convolution kernel of the calibration model (page 6, convolution kernel detection model); and determining calibration information of the target imaging device based on the target convolution kernel, wherein the calibration information is used to calibrate at least one of a device parameter of the target imaging device and imaging data acquired by the target imaging device (page 2, determine the image of the pedestrian; page 11, adjusting resolution of the input image).

Regarding claim 2: Sun further describes wherein the calibration information includes at least one of mechanical deviation information of the target imaging device (page 7, image resolution or dimensionality reduction), crosstalk information of the target imaging device, or scattering information of the target imaging device.

Regarding claim 3: Sun further describes wherein the determining the target convolution kernel based on the at least one candidate convolution kernel of the calibration model includes: determining the target convolution kernel by convolving the at least one candidate convolution kernel (page 7, convolution kernel is 3*3).

Regarding claim 4: Sun further describes wherein the determining a target convolution kernel based on the at least one candidate convolution kernel of the calibration model includes: determining an input matrix based on the size of the at least one candidate convolution kernel; and determining the target convolution kernel by inputting the input matrix into the calibration model (page 7, convolution kernel is 3*3).
Regarding claim 5: Sun further describes obtaining first projection data of a reference object, wherein the first projection data is acquired by the target imaging device, and the first projection data includes deviation projection data (page 4, direct sampling of pooling and adding a side-by-side convolution); obtaining second projection data of the reference object, wherein the second projection data excludes the deviation projection data (page 4, semantic Mask recognition); determining training data based on the first projection data and the second projection data; and generating the calibration model by training a preliminary model using the training data (page 4, and convenient; compared with Faster RCNN, use RCNN).

Regarding claim 6: Sun further describes determining an intermediate convolution kernel of an updated preliminary model generated in a previous iteration (page 4, ROI); determining a value of a loss function based on the first projection data (page 9, Lcls), the second projection data, and the intermediate convolution kernel (page 9, Lbox); and further updating the updated preliminary model to be used in a next iteration based on the value of the loss function (page 9, loss function L).

Regarding claim 7: Sun further describes determining the value of the loss function based on at least one of a value of a first loss function and a value of a second loss function (page 9, Lcls), wherein the value of the first loss function is determined based on the intermediate convolution kernel (page 9, Lbox), and the value of the second loss function is determined based on the first projection data and the second projection data (page 9, loss function L).

Regarding claim 8: Sun further describes the target imaging device including a detector, the detector including a plurality of detection units (page 2, RCNN, MR-FPP), and the calibration information including a positional deviation of a target detection unit among the plurality of detection units, wherein the determining calibration information of the target imaging device based on the target convolution kernel includes: determining at least one first difference between a central element of the target convolution kernel and at least one other element of the target convolution kernel (page 2, pedestrian detection at different road conditions); determining at least one second difference between a projection position of the target detection unit and at least one projection position of at least one other detection unit of the detector; and determining the positional deviation of the target detection unit based on the at least one first difference and the at least one second difference (page 4, using the MR-FPPI (miss rate against false positives per image) curve index test result, it can be found that the method can obviously improve the detection performance).

Regarding claim 9: Sun further describes wherein the target imaging device includes a radiation source, and the calibration information includes mechanical deviation information of the radiation source (page 2, the experimental results show that the method can carry out pedestrian detection at different road conditions of a complex environment).
Regarding claim 10: Sun further describes the target imaging device including a detector, the detector including a plurality of detection units, and the calibration information including a crosstalk coefficient of a target detection unit among the plurality of detection units (page 9, selecting cross entropy as the measurement standard), wherein the determining calibration information of the target imaging device based on the target convolution kernel includes: determining, based on at least one difference between a central element of the target convolution kernel and at least one other element of the target convolution kernel, at least one crosstalk coefficient of the at least one other element with respect to the target detection unit (page 9, the loss of the target segmentation result).

Regarding claim 11: Sun further describes wherein the at least one other element includes at least two other elements in a same target direction (page 9, the positive and negative sample ratio is 1:2), and the determining calibration information of the target imaging device based on the target convolution kernel further comprises: determining a first crosstalk coefficient of the target detection unit in the target direction based on a sum of the crosstalk coefficients of the at least two other elements with respect to the target detection unit (page 9, regression loss, the loss of the target segmentation result).

Regarding claim 12: Sun further describes wherein the determining calibration information of the target imaging device based on the target convolution kernel further includes: determining a second crosstalk coefficient of the target detection unit in the target direction based on a difference between the crosstalk coefficients of the at least two other elements with respect to the target detection unit (pages 8-9, 1*1 convolution layer processing with vector dimension, probability values).

Regarding claim 13: Sun further describes the calibration information including scattering information of the target imaging device, wherein the determining calibration information of the target imaging device based on the target convolution kernel includes: determining scattering information of the target imaging device corresponding to at least one angle of view based on the target convolution kernel (page 5, targets were observed at certain angles; page 9, overlapping rate).

Regarding claim 14: Sun further describes the calibration model also including a first activation function and a second activation function, wherein the first activation function is used to transform input data of the calibration model from projection data to data of a target type (page 2, different surfaces), the data of the target type being input to the at least one convolutional layer for processing; and the second activation function is used to transform output data of the at least one convolutional layer from the data of the target type to projection data (page 6, a max pooling layer after the x branch is added with a 1*1 convolution layer).

Regarding claim 15: Sun further describes wherein the calibration model also includes a fusion unit, and the fusion unit is configured to fuse the input data and the output data of the at least one convolutional layer (page 10, reduced resolution to 1024).
Regarding claim 16: Sun further describes the calibration information of the target imaging device including calibration information relating to defocusing of the target imaging device, wherein the calibration model also includes a data transformation unit, wherein the data transformation unit is configured to transform the data of the first target type to determine transformed data, and the transformed data is input to the at least one convolutional layer for processing (page 12, two paths of convolution layer, BN layer, ReLU layer).

Regarding claim 18: Sun further describes wherein the calibration information includes at least one of mechanical deviation information of the target imaging device (page 7, image resolution or dimensionality reduction), crosstalk information of the target imaging device, or scattering information of the target imaging device.

Regarding claim 21: Sun further describes obtaining first projection data of a reference object, wherein the first projection data is acquired by the target imaging device (fig. 4), and the first projection data includes deviation projection data; obtaining second projection data of the reference object (fig. 4, background), wherein the second projection data excludes the deviation projection data; determining training data based on the first projection data and the second projection data (fig. 5, person 0.999); and generating the calibration model by training a preliminary model using the training data (fig. 5, person target).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tung Lau, whose telephone number is (571) 272-2274 and whose email is Tungs.lau@uspto.gov. The examiner can normally be reached Tuesday-Friday, 7:00 AM-5:00 PM EST. Examiner interviews are available via telephone, in person, and by video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TURNER SHELBY, can be reached at 571-272-6334. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TUNG S LAU/
Primary Examiner, Art Unit 2857
Technology Center 2800
February 24, 2026

Prosecution Timeline

Oct 16, 2023
Application Filed
Feb 24, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596049
SEALING COMPONENT INSPECTION METHOD, INSPECTION DEVICE, AND INSPECTION PROGRAM
2y 5m to grant; granted Apr 07, 2026
Patent 12596034
METHOD AND SYSTEM FOR ADAPTING TO SPECIFIC TARGET PAINT APPLICATION PROCESSES
2y 5m to grant; granted Apr 07, 2026
Patent 12584964
SYSTEM FOR DIAGNOSING DRY ELECTRODE MIXTURE
2y 5m to grant; granted Mar 24, 2026
Patent 12584948
CONSUMED POWER CALCULATION METHOD FOR ELECTRIC MOTOR AND INDUSTRIAL MACHINE
2y 5m to grant; granted Mar 24, 2026
Patent 12575364
ABNORMALITY DETECTION DEVICE
2y 5m to grant; granted Mar 10, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 97% (+14.0%)
Median Time to Grant: 3y 0m
PTA Risk: Low

Based on 1112 resolved cases by this examiner. Grant probability derived from career allow rate.
