Prosecution Insights
Last updated: April 19, 2026
Application No. 18/768,241

System and Method for Aligning 3-D Imagery of a Patient's Oral Cavity in an Extended Reality (XR) Environment

Non-Final OA: §103, §112
Filed
Jul 10, 2024
Examiner
ZALALEE, SULTANA MARCIA
Art Unit
2614
Tech Center
2600 — Communications
Assignee
Diagnocat Inc.
OA Round
1 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 71% (346 granted / 488 resolved; +8.9% vs TC avg), above average
Interview Lift: +15.1% on resolved cases with interview (strong)
Typical timeline: 2y 7m average prosecution; 30 applications currently pending
Career history: 518 total applications across all art units

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center (TC) averages are estimates. Based on career data from 488 resolved cases.

Office Action

§103 §112
DETAILED ACTION

Priority

The application is a CIP of US applications 17564565, 17215315, 16783615 and 16175067. However, none of the parent applications discloses the feature of “registering the 3D imagery onto the display/feed's coordinates system, frame by frame while keeping temporal consistency between frames; and rendering the 3D imagery onto the video feed based on the video feed coordinates registered” in independent claims 1 and 12. In addition, none of the parent applications discloses the feature of “align the 3D object data with the video feed based on the identified corresponding points, wherein alignment by soft tissue points is used when the patient's mouth is closed, and alignment by hard tissue points is used when the patient's mouth is open; and display the aligned 3D objects in the XR environment” in independent claim 25. Therefore, the priority date of claims 1-21 and 23-25 is considered to be the current filing date, 2024-07-10, for examination purposes.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “module” in claims 12-21, 23-24.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Objections

Claim 2 is objected to because it recites the abbreviation CBCT without providing its full meaning upon first use. Applicant is required to amend the claim to define the term, e.g., cone beam computed tomography (CBCT). In addition, claims 23-25 are objected to under 37 CFR 1.75(f) as being incorrectly numbered as a result of missing claim 22. The claim numbers require correction.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 7-8, 11-21, 23-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 7 recites the limitation “the points” in “comparing them to the points detected on the 3D image”. There is insufficient antecedent basis for this limitation in the claim, as the base claim 1 does not recite any “points”. Furthermore, claim 8 recites the limitations “the points” and “the 3D objects” in “comparing them to the points detected on the 3D objects”.
There is insufficient antecedent basis for those limitations in the claim, as the base claim 1 does not recite any “points” or “3D objects”. In addition, the limitations “comparing them to the points detected on the 3D image” and “comparing them to the points detected on the 3D objects” in claims 7-8, respectively, appear incomplete for omitting essential steps or a structural relationship, such omission amounting to a gap between the steps. See MPEP § 2172.01. The omitted step/relationship is: detecting points … on the 3D image/objects (with a clear definition of “the points”).

In addition, claim 11 recites “converting the mesh from the volumetric image and the mesh/points from the surface scan to a point cloud”. There is insufficient antecedent basis for the limitations “the mesh” in “the mesh from the volumetric image” and “the points” in “the mesh/points”, as the base claim 1 does not recite a mesh from the volumetric image or points from the surface scan. Claim 1 recites “the volumetric image comprises a three-dimensional voxel array” and “the surface scan comprises a polygonal mesh or point cloud”. It is also not clear whether the points in “the mesh/points” are intended to refer to the point cloud from the surface scan; converting a point cloud into a point cloud would not make sense either. For examination purposes, it is interpreted as “converting the volumetric image and the mesh from the surface scan to a point cloud”.

Claims 8 and 18 recite “such as teeth”. In addition, claims 12 (and 13-21, 23-24 depending thereon) and 25 recite “such as facial landmarks”. The phrase “such as” renders the claims indefinite because it is unclear whether the limitations following the phrase are intended to be limiting or merely exemplary as part of the invention. See MPEP § 2173.05(d). For examination purposes, the examiner considers the limitations after “such as”, i.e., “teeth” and “facial landmarks”, as part of the invention for compact prosecution.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9, 12-18, 20-21, 23-24 are rejected under 35 U.S.C. 103 as being unpatentable over Casas (US 20160191887 A1), and further in view of Saphier et al (US 20210321872 A1) and Shin et al (US 20200380274 A1).
RE claim 1, Casas teaches a method for aligning three-dimensional (3D) imagery of a patient in an extended reality (XR) system (Figs 1-2, abstract, [0006]), said method comprising the steps of: receiving a 3D image comprising at least one of a volumetric image, surface scan, or a photograph with depth information of the patient, wherein the volumetric image comprises a three-dimensional voxel array representing an anatomical structure, the surface scan comprises a polygonal mesh or point cloud representing the same anatomical structure, and a photo represents the same anatomical structure with associated depth values for each pixel (Figs 1-3, [0030]-[0031], [0076], [0078]); receiving at least one of a real-time display or video feed from XR goggles, or a prerecorded video (display/feed), wherein the display/feed provides real-time or prerecorded imagery of the patient's anatomical structure (Figs 1-3, [0031], [0014]); registering the 3D imagery onto the display/feed's coordinates system frame by frame, while keeping temporal consistency; and rendering the 3D imagery onto the video feed based on the video feed coordinates registered (Figs 1-3, [0006], [0009]-[0012], [0016]-[0017], [0032], [0035]-[0038], [0124], [0127], wherein real-time tracking, pose update, and blending indicate keeping temporal consistency).

Casas is silent RE: the patient's oral cavity and the temporal consistency between frames. However, Saphier teaches receiving intraoral scanning of an oral cavity of the patient in abstract, [0213] for capturing 3D dental structures for dental evaluation and treatment.
In addition, Shin teaches keeping temporal consistency across frames in [0044] using correlation response values for learning correlation filters in object tracking in typical AR applications, which is readily available or can equally be applied to provide the seamless blended video stream in real time, enhancing user experience. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Casas a system and method of receiving intraoral scanning of an oral cavity of the patient for dental evaluation and treatment, as suggested by Saphier, and to further ensure temporal consistency across frames, as suggested by Shin, in order to provide the seamless blended video stream in real time, extending the method and system to dentistry and thereby increasing system effectiveness and user experience.

RE claim 2, Casas as modified by Saphier and Shin teaches wherein receiving the 3D image from a volumetric image involves capturing multiple CBCT images and reconstructing them into a three-dimensional voxel array that represents the patient's anatomical structure (Casas [0030], [0073]-[0075], wherein the 3D data set is rendered/reconstructed as the 3D volumetric image represented with a grid of voxels (volume elements); furthermore Saphier Fig 2A, [0213], [0471], [0521]).

RE claim 3, Casas as modified by Saphier and Shin teaches wherein receiving the 3D image from a surface scan involves intra-oral scanning the anatomical structure to produce a detailed polygonal mesh or point cloud that accurately reflects the surface contours (Casas [0076], [0083]; furthermore Saphier Fig 2A, [0225], [0342], [0521]).

RE claim 4, Casas as modified by Saphier and Shin teaches wherein receiving a video feed from the XR goggles includes streaming real-time video data from the goggles' cameras to a processing unit that integrates the registered feed of the 3D image (Casas [0171]).
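The frame-by-frame registration with temporal consistency mapped above is, in overlay-rendering pipelines, commonly realized by smoothing the estimated pose across frames so the rendered 3D imagery does not jitter. A minimal sketch, assuming a translation-only pose and a simple exponential moving average (this is an editorial illustration, not taken from the application or from Casas/Shin):

```python
import numpy as np

def smooth_pose(prev_pose, new_pose, alpha=0.3):
    """Blend the newly registered pose toward the previous frame's pose
    to suppress per-frame jitter (simple exponential smoothing).
    Works on translation vectors only; rotations need quaternion slerp."""
    if prev_pose is None:
        return new_pose
    return (1 - alpha) * prev_pose + alpha * new_pose

# Per-frame loop: register a raw pose, then smooth it before rendering.
raw_poses = [np.array([0.0, 0.0, 1.0]),
             np.array([0.2, 0.0, 1.0]),
             np.array([0.1, 0.0, 1.0])]
smoothed, prev = [], None
for p in raw_poses:
    prev = smooth_pose(prev, p)
    smoothed.append(prev)
```

Each frame's output pose depends on the previous frame's, which is what ties consecutive frames together instead of registering each frame independently.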
RE claim 5, Casas as modified by Saphier and Shin teaches wherein receiving a video feed from a prerecorded video includes accessing and processing stored video files that were captured during prior clinical sessions or procedures (Casas [0171]).

RE claim 6, Casas as modified by Saphier and Shin teaches wherein the registering and rendering of the 3D imagery onto the video feed comprises using machine learning algorithms to automatically identify and label anatomical landmarks on the volumetric image and surface scan (Casas [0130], [0167]; furthermore Saphier Figs 2A, 3, [0271]).

RE claim 7, Casas as modified by Saphier and Shin teaches wherein the registering and rendering of the 3D imagery onto the video feed comprises applying facial recognition algorithms to identify soft tissue landmarks and comparing them to the points detected on the 3D image (Casas [0130], [0089], [0092], [0108]; furthermore Saphier Figs 2A, 3, [0233], [0258]-[0259] identifying facial/dental landmarks).

RE claim 8, Casas as modified by Saphier and Shin teaches wherein the registering and rendering of the 3D imagery onto the video feed comprises applying dental recognition algorithms to identify hard tissue landmarks, such as teeth, and comparing them to the points detected on the 3D objects (Casas [0130], [0089], [0092], [0108]; furthermore Saphier Figs 2A, 3, [0233], [0258]-[0259], [0271] identifying facial/dental landmarks).

RE claim 9, Casas as modified by Saphier and Shin is silent RE wherein the registering and rendering of the 3D imagery onto the video feed comprises applying a fully convolutional U-Net-like architecture to obtain a probability distribution over the location of every point of interest; selecting a location of maximum probability as a detection of a landmark; and then filtering said detections by a probability threshold.
However, Saphier teaches applying a fully convolutional U-Net-like architecture to identify dental landmarks in [0369], obtaining a probability distribution over the location of every point of interest; selecting a location of maximum probability as a detection of a landmark; and then filtering said detections by a probability threshold ([0576], [0580], [0647], [0255], [0258]-[0259], [0295]-[0296], [0347]), to effectively perform dental landmark detection with high quality and accuracy.

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Casas as modified by Saphier and Shin a system and method wherein the registering and rendering of the 3D imagery onto the video feed comprises applying a fully convolutional U-Net-like architecture to obtain a probability distribution over the location of every point of interest; selecting a location of maximum probability as a detection of a landmark; and then filtering said detections by a probability threshold, as suggested by Saphier, to effectively perform dental landmark detection with high quality and accuracy utilizing the fully convolutional U-Net-like architecture and thereby increase system effectiveness and user experience.
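The claim 9 detection scheme (per-landmark probability map, argmax selection, probability-threshold filtering) can be sketched independently of any particular network. The sketch below assumes a U-Net-style model has already produced per-landmark probability maps; the function name, shapes, and threshold are illustrative assumptions, not drawn from Saphier or the application:

```python
import numpy as np

def detect_landmarks(heatmaps, threshold=0.5):
    """Given per-landmark probability maps of shape (num_landmarks, H, W),
    take the argmax location of each map as the candidate detection and
    keep only detections whose peak probability clears the threshold."""
    detections = {}
    for i, hm in enumerate(heatmaps):
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        if hm[y, x] >= threshold:
            detections[i] = (int(y), int(x))
    return detections

# Two landmarks: one confident peak, one weak peak that gets filtered out.
hm = np.zeros((2, 8, 8))
hm[0, 3, 5] = 0.9   # confident detection at row 3, col 5
hm[1, 2, 2] = 0.2   # below threshold -> dropped
print(detect_landmarks(hm))  # {0: (3, 5)}
```

The threshold step is what separates "a probability exists everywhere" from "a landmark was actually detected", which is the distinction the claim language draws.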
RE claim 12, Casas teaches a system for aligning 3D images in an extended reality (XR) environment, said system comprising: a processor; a memory element coupled to the processor (Figs 1-2, 6, abstract, [0006], [0183], [0185]); a first module configured to receive 3D images from a volumetric image or surface scan, wherein the volumetric image comprises a three-dimensional voxel array representing an anatomical structure, and the surface scan comprises a polygonal mesh or point cloud of the same anatomical structure (Figs 1-3, [0030]-[0031], [0076], [0078]); a second module configured to detect a set of points on said 3D image, wherein the detection includes identifying distinct anatomical landmarks on the volumetric image or surface scan, such as landmarks for soft tissue or landmarks for hard tissue, and assigning each identified point a unique identifier ([0130], [0089], [0092], [0108]); a third module configured to receive a video feed from XR goggles or a prerecorded video, wherein the video feed provides real-time or prerecorded imagery of the patient's anatomical structure (Figs 1-3, [0031], [0014]); a fourth module configured to register the 3D imagery onto the video feed's coordinates system, frame by frame while keeping temporal consistency; and a fifth module configured to render the 3D imagery onto the video feed based on the video feed coordinates registered (Figs 1-3, [0006], [0009]-[0012], [0016]-[0017], [0032], [0035]-[0038], [0124], [0127], wherein real-time tracking, pose update, and blending indicate keeping temporal consistency).

Casas is silent RE: facial landmarks or tooth landmarks and the temporal consistency between frames. However, Saphier teaches detecting facial landmarks or tooth landmarks in Figs 2A, 3, [0233], [0258]-[0259], [0271] to assist in dental evaluation and treatment planning.
In addition, Shin teaches keeping temporal consistency across frames in [0044] using correlation response values for learning correlation filters in object tracking in typical AR applications, which is readily available or can equally be applied to provide the seamless blended video stream in real time, enhancing user experience. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Casas a system and method of detecting facial landmarks or tooth landmarks for dental evaluation and treatment, as suggested by Saphier, and to further ensure temporal consistency across frames, as suggested by Shin, in order to provide the seamless blended video stream in real time, extending the method and system to dentistry and thereby increasing system effectiveness and user experience.

Claims 13-18, 23-24 recite limitations similar in scope to the limitations of claims 2-3, 6, 9, 7-8, 4-5, respectively, and are therefore rejected under the same rationale.

RE claim 20, Casas as modified by Saphier and Shin teaches further comprising a user interface module configured to allow a clinician to manually select and mark anatomical landmarks on the volumetric image or surface scan (Casas [0087]; in addition Saphier [0508]).

RE claim 21, Casas teaches further comprising a user interface module configured to allow a clinician to interact with the rendered video feed by hand or voice gesture to make annotations ([0076], [0084]).

Claims 10 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Casas as modified by Saphier and Shin, and further in view of Meyer et al (US 20210090272 A1).

RE claim 10, Casas as modified by Saphier and Shin is silent RE wherein the registering and rendering of the 3D imagery onto the video feed comprises detecting corresponding points by applying a weighted point-set alignment approach that gives different weights to soft tissue and hard tissue points.
However, Meyer teaches in Figs 7-8, [0045], [0072] separating soft tissue from hard tissue based on the contribution weight, for error-free registration. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Casas as modified by Saphier and Shin a system and method wherein the registering and rendering of the 3D imagery onto the video feed comprises detecting corresponding points by applying a weighted point-set alignment approach that gives different weights to soft tissue and hard tissue points, as suggested by Meyer, to obtain error-free registration and thereby increasing system effectiveness and user experience.

Claim 19 recites limitations similar in scope to the limitations of claim 10 and is therefore rejected under the same rationale.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Casas as modified by Saphier and Shin, and further in view of Anssari et al (US 20210322136 A1).

RE claim 11, Casas as modified by Saphier and Shin is silent RE converting the mesh/points from the surface scan to a point cloud. However, Anssari teaches this in [0075], [0098] to obtain the same data representation. Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Casas as modified by Saphier and Shin a system and method of converting the mesh/points from the surface scan to a point cloud, as suggested by Anssari, to perform point-based registration between the 3D images and surface scans and thereby increasing system effectiveness and user experience.

Claim 25 is rejected under 35 U.S.C. 103 as being unpatentable over Casas in view of Saphier.
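The weighted point-set alignment named in claim 10 can be illustrated with a weighted Kabsch solve, where per-point weights let hard-tissue points (e.g. teeth) count more or less than soft-tissue points. This is a generic sketch of the technique the claim names, not an implementation from Meyer or the application:

```python
import numpy as np

def weighted_rigid_align(src, dst, weights):
    """Weighted Kabsch: find rotation R and translation t minimizing the
    weighted sum of squared distances between (R @ src_i + t) and dst_i.
    src, dst: (N, 3) corresponding points; weights: (N,) nonnegative."""
    w = weights / weights.sum()
    mu_s = (src * w[:, None]).sum(axis=0)          # weighted centroids
    mu_d = (dst * w[:, None]).sum(axis=0)
    H = (src - mu_s).T @ ((dst - mu_d) * w[:, None])  # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Giving tooth landmarks a larger weight than cheek or lip landmarks makes the recovered pose track the rigid anatomy more closely, which is the stated point of the soft/hard weighting.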
RE claim 25, Casas teaches a system for aligning and displaying three-dimensional (3D) objects in an extended reality (XR) environment, said system comprising: a processor; a memory element coupled to the processor, wherein the memory element includes instructions (Figs 1-2, 6, abstract, [0006], [0183], [0185]) that, when executed by the processor, cause the processor to: receive 3D object data from volumetric imagers and surface scanners; obtain a video feed from an XR device or from prerecorded content (Figs 1-3, [0030]-[0031], [0076], [0078]); identify corresponding points between the 3D object data and the video feed, wherein the points are on at least one of soft tissue or hard tissue ([0130], [0089], [0092], [0108]); align the 3D object data with the video feed based on the identified corresponding points; and display the aligned 3D objects in the XR environment (Figs 1-3, [0006], [0009]-[0012], [0016]-[0017], [0032], [0035]-[0038], [0124], [0127], wherein real-time tracking, pose update, and blending indicate keeping temporal consistency).

Casas is silent RE: soft tissue, such as standard facial landmarks, or hard tissue, such as teeth; wherein alignment by soft tissue points is used when the patient's mouth is closed, and alignment by hard tissue points is used when the patient's mouth is open. However, Saphier teaches in Figs 2A, 3, [0217], [0233], [0258]-[0259], [0226], [0271], etc., identifying facial and dental landmarks to assist in dental evaluation and treatment planning, wherein soft tissue points and hard tissue points are visible when the mouth is closed and open, respectively. This can equally be used in the alignment and displaying accordingly, as readily recognized by one of ordinary skill in the art.
Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include in Casas a system and method of detecting standard facial landmarks on soft tissue, or hard tissue landmarks, such as teeth, wherein alignment by soft tissue points is used when the patient's mouth is closed, and alignment by hard tissue points is used when the patient's mouth is open, for dental evaluation and treatment, as set forth above applying Saphier, in order to extend the method and system to dentistry and thereby increasing system effectiveness and user experience.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. (See attached 892.)

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SULTANA MARCIA ZALALEE, whose telephone number is (571) 270-1411. The examiner can normally be reached Monday-Friday, 8:00 am-4:30 pm.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or CANADA) or 571-272-1000.

/Sultana M Zalalee/
Primary Examiner, Art Unit 2614
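Editorially, the claim 25 limitation at the center of the §103 rejection reduces to a selector between two landmark sets. A trivial sketch with hypothetical names (not from the application or the cited art):

```python
def choose_alignment_points(mouth_open, soft_tissue_pts, hard_tissue_pts):
    """Claim-25-style mode switch (illustrative): align by hard-tissue
    landmarks (teeth) when the mouth is open and they are visible;
    otherwise fall back to soft-tissue (facial) landmarks."""
    return hard_tissue_pts if mouth_open else soft_tissue_pts

# Mouth closed -> facial landmarks; mouth open -> tooth landmarks.
print(choose_alignment_points(False, ["nasion", "pogonion"], ["molar_cusp"]))
```

Framing the limitation this way makes clear why the examiner treats it as an obvious design choice once both landmark types are detectable: the switch itself adds no new detection capability, only a visibility-driven selection.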

Prosecution Timeline

Jul 10, 2024
Application Filed
Jan 30, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602876
ANNOTATION TOOLS FOR RECONSTRUCTING THREE-DIMENSIONAL ROOF GEOMETRY
2y 5m to grant Granted Apr 14, 2026
Patent 12592035
Fused Bounding Volume Hierarchy for Multiple Levels of Detail
2y 5m to grant Granted Mar 31, 2026
Patent 12586146
PROGRESSIVE MATERIAL CACHING
2y 5m to grant Granted Mar 24, 2026
Patent 12573150
POLYGON CORRECTION METHOD AND APPARATUS, POLYGON GENERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 10, 2026
Patent 12561908
TOPOLOGICALLY CONSISTENT MULTI-VIEW FACE INFERENCE USING VOLUMETRIC SAMPLING
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 71%
With Interview: 86% (+15.1%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 488 resolved cases by this examiner. Grant probability derived from career allow rate.
