Prosecution Insights
Last updated: April 19, 2026
Application No. 18/474,476

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM

Non-Final OA: §102, §103, §112
Filed: Sep 26, 2023
Examiner: PEDAPATI, CHANDHANA
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Terumo Kabushiki Kaisha
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
OA Rounds: 1-2
To Grant: 2y 10m
With Interview: 96%

Examiner Intelligence

Grants 64% of resolved cases.

- Career Allow Rate: 64% (14 granted / 22 resolved; +1.6% vs TC avg)
- Interview Lift: +32.5% (strong), based on resolved cases with interview
- Typical Timeline: 2y 10m avg prosecution; 26 applications currently pending
- Career History: 48 total applications across all art units
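These headline figures reduce to simple ratios over the examiner's resolved cases. A minimal sketch of one plausible derivation follows; the function and variable names are illustrative, only the 14 granted / 22 resolved and +32.5% figures come from the card above, and the tool may compute its projections differently:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: granted share of resolved cases, as a percentage."""
    return 100.0 * granted / resolved

base = allow_rate(14, 22)       # 63.6...%, displayed as 64%
with_interview = base + 32.5    # adding the stated +32.5% interview lift

print(f"Career allow rate: {base:.0f}%")           # -> 64%
print(f"With interview: {with_interview:.0f}%")    # -> 96%
```

Note that the unrounded base (63.6%) plus the lift lands at 96.1%, which matches the displayed 96%; adding the rounded 64% would suggest 96.5%.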

Statute-Specific Performance

- §101: 11.7% (-28.3% vs TC avg)
- §103: 47.0% (+7.0% vs TC avg)
- §102: 18.1% (-21.9% vs TC avg)
- §112: 20.9% (-19.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 22 resolved cases.
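As a sanity check, the four deltas are internally consistent: subtracting each stated delta from the examiner's rate recovers the same implied Tech Center average for every statute. A small illustrative check (plain Python; the dict layout is an assumption, the percentages come from the figures above):

```python
# {statute: (examiner_rate_pct, delta_vs_tc_avg_pct)} -- values from the chart
stats = {
    "101": (11.7, -28.3),
    "103": (47.0, 7.0),
    "102": (18.1, -21.9),
    "112": (20.9, -19.1),
}

for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # examiner rate minus delta = implied TC average
    print(f"§{statute}: implied TC average = {tc_avg:.1f}%")  # 40.0% for all four
```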

Office Action

Rejections under §102, §103, and §112.
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Notice to Applicant

Limitations appearing inside {} are intended to indicate limitations not taught by said prior art(s)/combinations. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement(s) filed on 09/15/2023 and 10/12/2023 have been considered.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. 
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:

- Classification image data acquisition unit in claim 1
- Merging determination unit in claim 1
- Image output unit in claim 1
- Classification change unit in claims 2 and 9
- Three-dimensional image output unit in claim 3
- Radial image output unit in claim 4
- Linear image output unit in claim 5
- Catheter image acquisition unit in claim 6
- Classification image data generation unit in claim 6

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. 
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7 and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 7 and Claim 17 recite the limitation “and generate the classification image data based on acquired classification image data”. It is unclear how the classification image data is generated based upon itself. For the purpose of examination, and to be consistent with the preceding recitation of claim 7, “wherein the classification image data generation unit is configured to input… the acquired catheter image to a trained model”, the limitation is read as follows: “and generate the classification image data based on acquired catheter image”.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. Claims 1-6, 8-9, 11-16, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sakamoto, US 20150257850 A1, as cited in the IDS. Regarding claim 1, Sakamoto teaches an information processing device comprising: a classification image data acquisition unit configured to acquire a plurality of classification image data (¶[0048]; Step S305, the image acquisition unit 210 acquires the blood vessel image 190), the plurality of classification image data being generated based on a plurality of catheter images acquired using an image acquisition catheter that acquires an image while moving in an axial direction on a scan plane (¶[0033]; multiple line data items can be obtained by performing scanning while changing an orientation of the probe and a position of the probe in a longitudinal direction of the blood vessel (i.e., axial direction)), the plurality of classification image data being classified into a plurality of regions including a first intraluminal region into which the image acquisition catheter is inserted and a second intraluminal region into which the image acquisition catheter is not inserted (¶[0035]; the boundary extraction unit 200 can detect a portion corresponding to the main blood vessel from the blood vessel image 190 … and a portion corresponding to the bifurcated blood vessel bifurcated from the main blood vessel from the blood vessel image 190) a merging determination unit configured to determine whether the second intraluminal region in a first catheter image of the plurality of catheter images merges with the first intraluminal region (¶[0066]; Step S330, the determination unit 240 determines 
whether the intersection point indicates the main blood vessel, based on the distance from the center point of the main blood vessel to the intersection point; determination unit 240 determines whether the intersection point indicates the bifurcated blood vessel) in a second catheter image acquired at an axial position different from an axial position of the first catheter image (¶[0070]; with regard to the intersection points present in the adjacent angular directions (i.e., considering first and second catheter images), when one intersection point indicates the main blood vessel and the other intersection point indicates the bifurcated blood vessel, the determination unit 240 determines that the intersection point indicating the main blood vessel indicates the boundary between the main blood vessel and the bifurcated blood vessel); an image output unit configured to output a region image (Sakamoto, ¶[0110]; display unit 120 to display the determination result) including the first intraluminal region based on the plurality of classification image data (Sakamoto, ¶[0110] and See Fig 9 exhibits display of main blood vessel (i.e., first intraluminal region)); and wherein the image output unit is configured to output, of the second intraluminal region, only the second intraluminal region in the first catheter image that is determined to merge by the merging determination unit, together with the first intraluminal region as the region image (Sakamoto, See Fig 9 and ¶[0110]; dots 911 to 915 display the intersection point indicating the main blood vessel, the intersection point indicating the bifurcated blood vessel; the intersection point indicating the boundary point between the main blood vessel and the bifurcated blood vessel). Regarding claim 2, Sakamoto teaches the information processing device according to claim 1. 
Sakamoto further teaches further comprising: a classification change unit (Sakamoto, ¶[0074]; S345, the determination update unit 250) configured to change classification of the second intraluminal region in the first catheter image of the classification image data that is determined to merge by the merging determination unit to the first intraluminal region (Sakamoto, ¶[0081]; the preceding tomographic image is compared with the subsequent tomographic image (i.e., first and second catheter images); ¶[0082]; the determination update unit 250 determines that the intersection point determined to indicate the bifurcated blood vessel by the determination unit 240 indicates the main blood vessel); and the image output unit is configured to output, of the second intraluminal region acquired by the classification image data acquisition unit, only the second intraluminal region whose classification is changed by the classification change unit together with the first intraluminal region as the region image (Sakamoto, ¶[0110]; FIG. 9. Dots 911 to 915, which indicate detected intersection points and are displayed in mutually different colors are displayed by being superimposed on one another on the cross-sectional image 910. The respective dots 911 to 915 display the intersection point indicating the main blood vessel, the intersection point indicating the bifurcated blood vessel; the determination results automatically generated by the determination unit 240 or the determination update unit 250 can be displayed, thereby enabling a user to relatively easily recognize a portion corresponding to the bifurcated blood vessel). Regarding claim 3, Sakamoto teaches the information processing device according to claim 1. 
Sakamoto further teaches wherein the image output unit includes a three-dimensional image output unit configured to output a three-dimensional image including the first intraluminal region as the region image based on the plurality of classification image data (Sakamoto, Fig 15 exhibits 3D image 1570, and ¶[0145]; the display control unit 110 can cause the display unit 120 to display the display screen, which can include at least one of the cross-sectional image, the longitudinal image, and the three-dimensional image of the blood vessel). Regarding claim 4, Sakamoto teaches the information processing device according to claim 1. Sakamoto further teaches wherein the image acquisition catheter is a radial scan catheter (Sakamoto, ¶[0032]; intravascular ultrasound (IVUS) diagnosis apparatus, an optical coherence tomography (OCT) diagnosis apparatus, or an optical frequency-domain imaging (OFDI)); the information processing device further comprises a radial image output unit configured to output one of the plurality of catheter images as a radial two- dimensional image (Sakamoto, ¶[0110]; A cross-sectional image 910 … displayed on a display screen 900); and the image output unit is configured to output the region image generated based on the catheter images so as to be superimposed on the radial two- dimensional image (Sakamoto, ¶[0110]; Dots 911 to 915, which indicate detected intersection points and are displayed in mutually different colors are displayed by being superimposed on one another on the cross-sectional image 910). Regarding claim 5, Sakamoto teaches the information processing device according to claim 1. 
Sakamoto further teaches wherein the image acquisition catheter is a radial scan catheter (Sakamoto, ¶[0032]; intravascular ultrasound (IVUS) diagnosis apparatus, an optical coherence tomography (OCT) diagnosis apparatus, or an optical frequency-domain imaging (OFDI)); a linear image output unit configured to output a linear two-dimensional image along the axial direction (Sakamoto, ¶[0146]; longitudinal image 1510 is displayed on a display screen 1500); and the image output unit is configured to output the region image so as to be superimposed on the linear two-dimensional image (Sakamoto, See Fig 15, and ¶[0110]; the dots 911 to 915 can also be displayed by being superimposed on one another on the longitudinal image).

[Figure: Sakamoto Fig. 15]

Regarding claim 6, Sakamoto teaches the information processing device according to claim 1. Sakamoto further teaches further comprising: a catheter image acquisition unit configured to acquire the plurality of catheter images (Sakamoto, ¶[0048]; Step S305, the image acquisition unit 210 is adapted to acquire the blood vessel image 190 configured to have multiple tomographic images at a time); and a classification image data generation unit configured to classify the plurality of catheter images into a plurality of regions including the first intraluminal region and the second intraluminal region (Sakamoto, ¶[0039]; Processing can be performed on the respective tomographic images by each unit, thereby generating the information indicating the portion corresponding to the main blood vessel and the information distinguishing the portion corresponding to the bifurcated blood vessel) and to generate the classification image data (Sakamoto, ¶[0113]; generation unit 1020 generates quantitative information indicating a form of the bifurcated blood vessel in the bifurcated portion from the main blood vessel by using the information acquired by the information acquisition unit 1010). 
Regarding claim 8, Sakamoto teaches the information processing device according to claim 6. Sakamoto further teaches wherein the catheter image acquisition unit is configured to sequentially acquire catheter images acquired using the image acquisition catheter in real time (Sakamoto, ¶[0048]; the image acquisition unit 210 may sequentially acquire the tomographic images, which are sequentially generated while the blood vessel is scanned using the probe); and the classification image data generation unit is configured to sequentially generate the classification image data (Sakamoto, ¶[0048]; processing subsequent to Step S310 is sequentially performed on each tomographic image). Regarding claim 9, Sakamoto teaches the information processing device according to claim 6. Sakamoto further teaches further comprising: a classification change unit configured to change the second intraluminal region in the first catheter image determined to merge by the merging determination unit to the first intraluminal region (Sakamoto, ¶[0078]; in view of a size of the blood vessel on the cross-sectional image on the upstream side or the downstream side, the determination update unit 250 determines whether or not the determination that the detected intersection points 504 and 506 indicate the main blood vessel is appropriate; ¶[0082]; the determination update unit 250 determines that the intersection point determined to indicate the bifurcated blood vessel by the determination unit 240 indicates the main blood vessel); and the classification change unit is configured to sequentially process the classification image data generated by the classification image data generation unit (Sakamoto, ¶[0079]; determination update unit 250 can sequentially detect the intersection points indicating the main blood vessel by setting). Claim 11 is similarly rejected as analogous claim 1. Claim 12 is similarly rejected as analogous claim 2. Claim 13 is similarly rejected as analogous claim 3. 
Claim 14 is similarly rejected as analogous claim 4. Claim 15 is similarly rejected as analogous claim 5. Claim 16 is similarly rejected as analogous claim 6. Claim 20 is similarly rejected as analogous claim 1. Sakamoto further teaches a non-transitory computer-readable medium storing a program causing a computer to execute a process comprising (Sakamoto, ¶[0152]; a computer-readable storage medium 1630).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7, 17, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Sakamoto in view of Balocco et al., US 20120130243 A1, as cited in the IDS, hereinafter Balocco.

Regarding claim 7, Sakamoto teaches the information processing device according to claim 6. Sakamoto further teaches wherein the classification image data generation unit is configured to input, when receiving a catheter image, the acquired catheter image to {a trained} model that outputs classification image data obtained by classifying each region of the catheter image into a predetermined region, and generate the classification image data based on acquired classification image data (Sakamoto, ¶[0039]; the image acquisition unit 210 acquires multiple tomographic images. 
Processing can be performed on the respective tomographic images by each unit thereby generating the information indicating the portion corresponding to the main blood vessel and the information distinguishing the portion corresponding to the bifurcated blood vessel). Sakamoto does not explicitly teach a trained model. Balocco further teaches a trained model (Balocco, ¶[0052]; automated classifier that is trained to identify bifurcations; ¶[0069] classify the portions of the IVUS data and detect bifurcations). Sakamoto and Balocco are analogous art because they are from the same field of endeavor of detecting and displaying bifurcations in body lumens, such as vascular bifurcations. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include a trained model as taught by Balocco to the invention of Sakamoto. The motivation to do so would be to automate analysis of a sequence of frames.

Claim 17 is similarly rejected as analogous claim 7.

Regarding claim 18, Sakamoto teaches the information processing device according to claim 16. 
Sakamoto further teaches further comprising: sequentially acquiring catheter images acquired using the image acquisition catheter {in real time} (Sakamoto, ¶[0048]; the image acquisition unit 210 may sequentially acquire the tomographic images); sequentially generating the classification image data (Sakamoto, ¶[0048]; which are sequentially generated while the blood vessel is scanned using the probe); changing the second intraluminal region in the first catheter image determined to merge to the first intraluminal region (Sakamoto, ¶[0078]; in view of a size of the blood vessel on the cross-sectional image on the upstream side or the downstream side, the determination update unit 250 determines whether or not the determination that the detected intersection points 504 and 506 indicate the main blood vessel is appropriate; ¶[0082]; the determination update unit 250 determines that the intersection point determined to indicate the bifurcated blood vessel by the determination unit 240 indicates the main blood vessel); and sequentially processing the generated classification image data (Sakamoto, ¶[0079]; determination update unit 250 can sequentially detect the intersection points indicating the main blood vessel by setting). Sakamoto does not explicitly teach using the image acquisition catheter in real time. However, Balocco teaches using the image acquisition catheter in real time (Balocco, ¶[0003]; IVUS imaging systems can be used in real (or almost real) time). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include real-time imaging as taught by Balocco to the invention of Sakamoto. The motivation to do so would be to monitor or assess ongoing intravascular treatments, such as angiography and stent placement. Claims 10 and 19 are rejected under 35 U.S.C. 
103 as being unpatentable over Sakamoto in view of Balocco, and further in view of Li et al., US 20200226422 A1, as cited in the IDS, hereinafter Li.

Regarding claim 10, the combination of Sakamoto and Balocco teaches the information processing device according to claim 1. Sakamoto further teaches wherein the classification image data is classified into the first intraluminal region, the second intraluminal region (Sakamoto, ¶[0035]; the boundary extraction unit 200 can detect a portion corresponding to the main blood vessel from the blood vessel image 190; the boundary extraction unit 200 can detect a portion corresponding to the bifurcated blood vessel bifurcated from the main blood vessel from the blood vessel image 190), {a biological tissue region (interpreted as “lumen organ is combined with a muscle, a nerve, fat, or the like adjacent to or close to the lumen organ”; specification ¶[0052]), and Balocco further teaches a non-intraluminal region that is none of the aforementioned regions (interpreted as “region in which a sufficiently clear image is not depicted due to an acoustic shadow or attenuation of ultrasound or the like is also in the non-intraluminal region 517”; specification ¶[0053]) (Balocco, ¶[0099]; since the textural pattern of the shadow is repeated along several frames of the sequence, it is possible to discriminate between the two structures by discarding, from the classification maps, the regions in which the longitudinal dimension is much more extended than the angular dimension). The combination of Sakamoto and Balocco does not explicitly teach classification of image data into a biological tissue region. 
However, Li teaches a classification of image data into a biological tissue region (Li, ¶[0214]; classifying, identifying, and or characterizing various tissue types and features/regions of interest in patient image data and image data elements; plaques, calcium, calcified plaques, calcium containing tissue, lesions, fat, branching angles of arterial trees; and branching models; combinations of the foregoing and classification or types of the foregoing). Sakamoto and Li are analogous art because they are from the same field of endeavor of imaging arteries and segmenting and characterizing components. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include classification of different tissue types within the image frame as taught by Li to the combined invention of Sakamoto and Balocco. The motivation to do so would be to improve upon the problem of time and information management during time-critical medical procedures such as those performed in the cath lab by enhancing viewing of image data that includes various characterized tissues and the boundaries and relative arrangement, so that end users can reach planning decisions and make informed decisions based upon diagnostic information more quickly and with a more informed context than would otherwise be possible given a set of images with tissue-characterized regions.

Claim 19 is similarly rejected as analogous claim 9.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

- Nair et al., US 20140270429 A1, teaches systems and methods for tissue characterization using multiple independent pattern recognition models for use in catheterization procedures.
- Kimmel et al., US 20050249391 A1.
- LEE et al., US 20190125173 A1, teaches distinguishing a gastrointestinal tract junction from the image using a machine-learned classification model. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANDHANA PEDAPATI, whose telephone number is 571-272-5325. The examiner can normally be reached M-F 8:30am-6pm (ET).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chan Park, can be reached at 571-272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHANDHANA PEDAPATI/
Examiner, Art Unit 2669

/CHAN S PARK/
Supervisory Patent Examiner, Art Unit 2669

Prosecution Timeline

- Sep 26, 2023: Application Filed
- Oct 22, 2025: Non-Final Rejection under §102, §103, and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

- Patent 12602896: IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM (2y 5m to grant; granted Apr 14, 2026)
- Patent 12597095: INTELLIGENT SYSTEM AND METHOD OF ENHANCING IMAGES (2y 5m to grant; granted Apr 07, 2026)
- Patent 12571683: ELEVATED TEMPERATURE SCREENING SYSTEMS AND METHODS (2y 5m to grant; granted Mar 10, 2026)
- Patent 12548180: HOLE DIAMETER MEASURING DEVICE (2y 5m to grant; granted Feb 10, 2026)
- Patent 12541829: MOTION-BASED PIXEL PROPAGATION FOR VIDEO INPAINTING (2y 5m to grant; granted Feb 03, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

- Expected OA Rounds: 1-2
- Grant Probability: 64%
- With Interview: 96% (+32.5%)
- Median Time to Grant: 2y 10m
- PTA Risk: Low

Based on 22 resolved cases by this examiner. Grant probability derived from career allow rate.
