Prosecution Insights
Last updated: April 19, 2026
Application No. 18/714,185

X-RAY DISSECTOGRAPHY

Non-Final OA: §103, §112
Filed
May 29, 2024
Examiner
AHMAD, NAUMAN UDDIN
Art Unit
2611
Tech Center
2600 — Communications
Assignee
Rensselaer Polytechnic Institute
OA Round
1 (Non-Final)
78%
Grant Probability
Favorable
1-2
OA Rounds
2y 8m
To Grant
98%
With Interview

Examiner Intelligence

Grants 78% — above average
78%
Career Allow Rate
28 granted / 36 resolved
+15.8% vs TC avg
Strong +20% interview lift
+19.8%
Interview Lift
across resolved cases with interview
Typical timeline
2y 8m
Avg Prosecution
31 currently pending
Career history
67
Total Applications
across all art units
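The headline figures in this panel can be reproduced from the career counts shown above. A minimal sketch, assuming the allow rate is simply grants divided by resolved cases and that the "With Interview" figure adds the interview lift to the base rate (the vendor's exact formulas are not published):

```python
# Career counts from the panel above: 28 granted out of 36 resolved cases.
granted = 28
resolved = 36

allow_rate = granted / resolved      # 0.777... -> displayed as 78%
interview_lift = 0.198               # the +19.8% interview lift shown above

print(f"Career allow rate: {allow_rate:.0%}")
print(f"With interview:    {allow_rate + interview_lift:.0%}")
```

Both rounded figures match the panel (78% and 98%), which suggests the dashboard rounds derived rates to whole percentages.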

Statute-Specific Performance

§101
4.8%
-35.2% vs TC avg
§103
68.4%
+28.4% vs TC avg
§102
4.1%
-35.9% vs TC avg
§112
15.8%
-24.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 36 resolved cases
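The per-statute deltas can be checked against the rates: subtracting each delta from the examiner's rate recovers the implied Tech Center average (the single black-line estimate mentioned in the legend). A hedged sketch, assuming delta = examiner rate minus TC average:

```python
# Examiner's statute-specific rates (%) and their deltas vs the TC average,
# as shown in the panel above.
rates  = {"§101": 4.8, "§103": 68.4, "§102": 4.1, "§112": 15.8}
deltas = {"§101": -35.2, "§103": 28.4, "§102": -35.9, "§112": -24.2}

for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]   # implied Tech Center average
    print(f"{statute}: implied TC average {tc_avg:.1f}%")
```

All four statutes back out to the same 40.0% baseline, consistent with a single black-line estimate rather than per-statute averages.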

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “dissectography module…” in claim 1; “input module…” in claims 1, 8 and 14; “intermediate module…” in claims 1, 8 and 14; “output module…” in claims 1, 8 and 14.

Claim 1, “dissectography module for dissecting”: The corresponding structure in the disclosure for performing the claimed module function of dissecting is “‘module’ may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations” (page 18). Therefore, the interpretation of the “dissectography module… for…” is an app and/or firmware which may include circuitry programmed with the corresponding algorithm(s)/software to perform the respective functions, and equivalents thereof.

Claims 1 and 14, “input module configured to receive… and to generate…”: The corresponding structure in the disclosure for performing the claimed module functions of receiving and generating is “‘module’ may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations” (page 18). Therefore, the interpretation of the “input module… configured to…” is an app and/or firmware which may include circuitry programmed with the corresponding algorithm(s)/software to perform the respective functions, and equivalents thereof.
Claims 1 and 14, “intermediate module configured to generate…”: The corresponding structure in the disclosure for performing the claimed module function of generating is “‘module’ may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations” (page 18). Therefore, the interpretation of the “intermediate module… configured to…” is an app and/or firmware which may include circuitry programmed with the corresponding algorithm(s)/software to perform the respective functions, and equivalents thereof.

Claims 1 and 14, “output module configured to generate…”: The corresponding structure in the disclosure for performing the claimed module function of generating is “‘module’ may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations” (page 18). Therefore, the interpretation of the “output module… configured to…” is an app and/or firmware which may include circuitry programmed with the corresponding algorithm(s)/software to perform the respective functions, and equivalents thereof.

Claim 8, “generating, by the input module…”: The corresponding structure in the disclosure for performing the claimed module function of generating is “‘module’ may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations” (page 18). Therefore, the interpretation of the “input module…” is an app and/or firmware which may include circuitry programmed with the corresponding algorithm(s)/software to perform the respective functions, and equivalents thereof.

Claim 8, “generating, by an intermediate module,…”: The corresponding structure in the disclosure for performing the claimed module function of generating is “‘module’ may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations” (page 18).
Therefore, the interpretation of the “intermediate module…” is an app and/or firmware which may include circuitry programmed with the corresponding algorithm(s)/software to perform the respective functions, and equivalents thereof.

Claim 8, “generating, by an output module…”: The corresponding structure in the disclosure for performing the claimed module function of generating is “‘module’ may refer to an app, software, firmware and/or circuitry configured to perform any of the aforementioned operations” (page 18). Therefore, the interpretation of the “output module…” is an app and/or firmware which may include circuitry programmed with the corresponding algorithm(s)/software to perform the respective functions, and equivalents thereof.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C.
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 1, 8, 14 and 20 recite the limitation "wherein dissecting corresponds to extracting" in the second-to-last line. There is insufficient antecedent basis for this limitation in the claim. This is because line one of each claim also recites “dissecting,” so it is unclear whether this second instance of dissecting refers to the first instance or to a new instance of dissecting.

Claims 2-7, 9-13 and 15-19 are rejected under 35 U.S.C. 112(b) because they depend on a claim that is rejected under 35 U.S.C. 112(b).

Note: most likely these claims depend on some dependent claim or are missing elements. To fix this issue, the dependencies should be reviewed; any first instance of an element should be made clearly a first instance by referring to it with “a” or “an” instead of “the,” and, if multiple instances exist, further instances should be distinguished, for example as “first,” “second,” and/or “third,” etc.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1, 5, 8, 12, 14, 18 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Aubert et al. (U.S. Patent Application Publication No. 2023/0177748), hereinafter referenced as Aubert, in view of Nie et al. (U.S. Patent Application Publication No. 2019/0139223), hereinafter referenced as Nie. Regarding claim 1, Aubert teaches a dissectography module for dissecting a two-dimensional (2D) radiograph, the dissectography module comprising: (paragraph 23 teaches "differentiate said first anatomical structure from said second anatomical structure, and convert a real x-ray image into at least two digitally reconstructed radiographs (DRR)." 
and paragraphs 121-123 teach "x-ray image 16 is converted by a GAN 17 into a converted global DRR 5 which, in turn, is segmented by DI2I CNN 18 into: a first binary mask 12 of this converted global DRR 5, corresponding to the segmentation 14 of the first organ 3, a second binary mask 13 of this converted global DRR 5, corresponding to the segmentation 15 of second organ 4."); this shows an x-ray image (2D radiograph) being dissected/segmented by the DI2I CNN, which acts as the dissectography module when implemented on a computer, as one of ordinary skill in the art would understand it to be; wherein dissecting corresponds to extracting a region of interest from the 2D input radiographs while suppressing one or more other structure(s) (paragraph 16 teaches "automatically converting: at least one or more real x-ray images of a patient, including at least a first anatomical structure of said patient and a second anatomical structure of said patient, into at least one digitally reconstructed radiograph (DRR) of said patient representing said first anatomical structure without representing said second anatomical structure"); the first anatomical structure is the region of interest, this is from the x-ray image (which is the 2D input radiograph(s)), and this dissects by reconstructing and keeping the first anatomical structure but not the second, which shows suppressing the second anatomical structure. 
However, Aubert fails to teach an input module configured to receive a number K of 2D input radiographs, and to generate at least one three-dimensional (3D) input feature set, and K 2D input feature sets based, at least in part, on the K 2D input radiographs; an intermediate module configured to generate a 3D intermediate feature set based, at least in part, on the at least one 3D input feature set; and an output module configured to generate output image data based, at least in part, on the K 2D input feature sets, and the 3D intermediate feature set, However, Nie teaches an input module configured to receive a number K of 2D input radiographs, (Nie, paragraph 93 teaches "acquiring image data may be performed by the image data acquisition module 302. In some embodiments, the image data may be one or more two-dimensional slice images acquired by scanning an object. The scanned object may be an organ, a body, a substance, an injured part, a tumor, or the like, or any combination thereof"); image data acquisition module acts as input module and this receives a number k (one or more) of 2D input radiographs / scanned images; and to generate at least one three-dimensional (3D) input feature set, and K 2D input feature sets based, at least in part, on the K 2D input radiographs (Nie, fig. 6 step 601 teaches determining two ROIs (region of interests), step 605 teaches volume of interest (VOI) based on the ROIs, and paragraph 103 teaches "In 601, at least two ROIs may be determined…at least two ROIs may include regions corresponding to the same three-dimensional target of interest in different two-dimensional slice images. 
For instance, a first ROI may be determined in a first two-dimensional slice image, and a second ROI may be determined in a second two-dimensional slice image...user may desire to determine a tumor in the liver, and the ROI determination unit 402 may determine tumor regions displayed in a plurality of two-dimensional slice images"); this shows 3D input feature set / volume (VOI) of a feature (such as tumor of liver example given) being generated as well as K 2D input feature sets / two ROIs of features (2 is K since from above "one or more" would include two scanned images thus based on 2 2D input radiographs); an intermediate module configured to generate a 3D intermediate feature set based, at least in part, on the at least one 3D input feature set (Nie, fig. 6, step 607 teaches optimizing the VOI and paragraph 112 teaches "operation of optimizing the generated VOI may be performed by the editing unit 410...optimization may include adjusting the volume scope included by the VOI contour surface. For example, during the process of extracting a tumor, the scope of the VOI contour surface may be adjusted to allow the VOI to include the entire tumor region as much as possible"); this volume update/optimization in step 607 is based on the VOI / 3D input feature set since it comes after generating VOI in step 605, thus this optimized VOI acts as 3D intermediate feature set and editing unit 410 here acts as intermediate module; and an output module configured to generate output image data based, at least in part, on the K 2D input feature sets, and the 3D intermediate feature set, (Nie, fig. 7, step 714 and paragraph 129 teaches "In 714, the at least two ROIs and the VOI may be displayed synchronously. Operation 714 may be performed by the display module 310"); display module acts as output module, generating VOI based on ROIs in step 710 corresponds to fig. 
6 step 605, thus the editing VOI in step 712 corresponds to the optimizing VOI in step 607, therefore the displaying/output of step 714 is based on the previous steps, inclusive of being based on the 2 2D input feature sets (2 ROIs) and the 3D intermediate feature set (optimized VOI). Nie is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of taking medical images and processing the images to segment as well as emphasize important portions of the images. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Aubert's invention with the number of radiographs and feature set techniques of Nie to enhance the target of interest (Nie, paragraph 104). This would be done by applying these techniques to the region of interest. Regarding claim 5, the combination of Aubert and Nie teaches wherein the number K is equal to two, and the output image data corresponds to two (Nie, paragraph 99 teaches "the determination of the VOI may include processing the image data, reconstruct a three-dimensional image to implement stereo display,"); as mentioned in claim 1, two scanned images / 2D input radiographs are used and two ROIs / 2D input feature sets are generated, thus K is two as aforementioned, and this stereo display shows two dissected radiographs would be output, with left and right eye images for stereoscopy, which one of ordinary skill in the art would understand is commonly used as a pair in 3D glasses. The same motivations used in claim 1 apply here in claim 5. Regarding claim 8, the method claim 8 recites similar limitations as product/module claim 1, and thus is rejected under similar rationale. Regarding claim 12, the method claim 12 recites similar limitations as product/module claim 5, and thus is rejected under similar rationale. 
Regarding claim 14, the system claim 14 recites similar limitations as product/module claim 1, and thus is rejected under similar rationale. In addition Nie, fig. 2 shows system 200 with processor 202 on computing device 216, memory 206, disk/data store 212 and input/output circuitry 210. Regarding claim 18, the system claim 18 recites similar limitations as product/module claim 5, and thus is rejected under similar rationale. Regarding claim 20, the computer readable storage device claim 20 recites similar limitations as method claim 8, and thus is rejected under similar rationale. In addition, Nie, paragraph 192 teaches “one or more computer readable media having computer readable program code embodied thereon.” And fig. 2, shows processor 202. Claim(s) 2-3, 6-7, 9-10, 13, 15-16 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Aubert and Nie as applied to claim 1, 8 and 14 above, and further in view of Chen et al. (U.S. Patent Application Publication No. 2021/0056703), hereinafter referenced as Chen. Regarding claim 2, the combination of Aubert and Nie fails to explicitly teach wherein the input module, the intermediate module and the output module each comprise an artificial neural network (ANN). Although Aubert, paragraph 23 teaches "a group of convolutional neural networks (CNN) which is preliminarily trained to, both or simultaneously: differentiate said first anatomical structure from said second anatomical structure," However, Chen explicitly teaches wherein the input module, the intermediate module and the output module each comprise an artificial neural network (ANN) (Chen, abstract teaches "the system inputs the CT slice into a plurality of branches of a trained segmentation block. 
Each branch of the segmentation block includes a convolutional neural network (CNN)"); a CNN is a type of ANN, and each branch of the segmentation block containing one means all three of the listed modules (since they act as branches of the segmentation block when viewed in combination) would each contain a CNN/ANN. Chen is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of radiographs and segmentations of such using CNN and ANN. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Aubert and Nie with the CNN techniques of Chen to more accurately and consistently generate three-dimensional segmentations (Chen, paragraph 4). This would be done by using CNNs. Regarding claim 3, the combination of Aubert, Nie and Chen teaches wherein the input module comprises K input 2D artificial neural networks (ANNs), and the output module comprises K output 2D ANNs, (Chen, paragraph 32 teaches "segmentation block 215 includes two or more branches of CNN (e.g., two, three, four, five, or ten branches, etc.)" and abstract teaches "Each branch of the segmentation block includes a convolutional neural network (CNN)"); a standard CNN is 2D, and this shows K (two) branches, each having a CNN/ANN, each of which must have an input and an output for the segmentation block; each input 2D ANN is configured to receive a respective 2D input radiograph and to generate a respective 2D input feature set, (Chen, paragraph 51 teaches "each slice for at least a subset of the set of CT slices is input 605 into a plurality of branches of a trained segmentation block 215. Each branch includes a CNN with convolution filters at a different scale, and each branch produces one or more levels of output. 
Feature maps are generated 610 for each of the levels of output based on a combination of same-level outputs across each of the branches"); a feature map generated for each output shows a respective 2D input feature set, and this is done for each slice (each 2D input radiograph received by an input 2D ANN); and each output 2D ANN is configured to receive a respective 2D intermediate feature set and to generate a respective dissected view (Chen, paragraph 51 teaches "two-dimensional segmentation of the slice is generated 615 based on the feature maps of each of the levels of output."); the segmentation is the respective dissected view generated, and being generated based on the feature maps of output shows the output 2D ANN configured to receive feature maps/intermediate feature sets to do so. The same motivations used in claim 2 apply here in claim 3. Regarding claim 6, the combination of Aubert, Nie and Chen teaches wherein the input module corresponds to a back projection module, (Nie, paragraph 70 teaches "image data acquisition module 302 may obtain image data. The image may be a medical image. 
The medical image may include a CT image"); a CT image means the image data acquisition module can be a CT scanner, and a CT scanner has back projection, thus the input / image data acquisition module corresponds to a back projection module; the intermediate module corresponds to a 3D fusion module (Chen, paragraph 6 teaches "aggregating the segmentations of each slice in the subset of CT slices to generate a three-dimensional segmentation of the lesion"); this shows forming a unified 3D representation by aggregating segmentations, thus acting as a 3D fusion, and since it is for generating a 3D segmentation, it would be done by the intermediate module of Nie, which shows that module also corresponds to a 3D fusion module; and the output module corresponds to a projection module (Nie, paragraph 159 teaches "virtual projection line may pass through the VOI region of two-dimensional slice image sequence(s) at a pre-determined angle, a two-dimensional projection may be performed on voxels in different slice images on the same projection line, then the voxels in different slice images may be comprehensively displayed based on a virtual illumination effect"); projection performed for display/output shows the output module corresponds to a projection module. The same motivations used in claim 2 apply here in claim 6. Regarding claim 7, the combination of Aubert, Nie and Chen teaches wherein each ANN is a convolutional neural network (Chen, abstract teaches "the system inputs the CT slice into a plurality of branches of a trained segmentation block. Each branch of the segmentation block includes a convolutional neural network (CNN)"); CNN is a type of ANN, thus each ANN here is a CNN. The same motivations used in claim 2 apply here in claim 7. Regarding claim 9, the method claim 9 recites similar limitations as product/module claim 2, and thus is rejected under similar rationale. 
Regarding claim 10, the method claim 10 recites similar limitations as product/module claim 3, and thus is rejected under similar rationale. Regarding claim 13, the method claim 13 recites similar limitations as product/module claim 6, and thus is rejected under similar rationale. Regarding claim 15, the system claim 15 recites similar limitations as product/module claim 2, and thus is rejected under similar rationale. Regarding claim 16, the system claim 16 recites similar limitations as product/module claim 3, and thus is rejected under similar rationale. Regarding claim 19, the system claim 19 recites similar limitations as product/module claim 6, and thus is rejected under similar rationale. Claim(s) 4, 11 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Aubert and Nie as applied to claims 1, 8 and 14 above, and further in view of Min et al. (U.S. Patent Application Publication No. 2018/0124331), hereinafter referenced as Min. Regarding claim 4, the combination of Aubert and Nie fails to explicitly teach wherein the intermediate module comprises a 3D ANN configured to generate the 3D intermediate feature set. However, Min teaches wherein the intermediate module comprises a 3D ANN configured to generate the 3D intermediate feature set (Min, paragraph 7 teaches "applying a three-dimensional Convolutional Neural Network (C3D) to image frames of the video sequence to obtain, for the video sequence, (i) intermediate feature representations across L convolutional layers"); this shows a 3D ANN/CNN generating intermediate feature representations (meaning the intermediate module is used, and thus the intermediate module from Nie must comprise such a 3D ANN). Min is considered to be analogous art because it is reasonably pertinent to the problem faced by the inventor of generating feature representations using an ANN. 
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Aubert and Nie with the ANN intermediate feature generation techniques of Min to model important coarse or fine-grained spatiotemporal structures and ensure the model adaptively attends to different locations within the feature maps at particular layers (Min, paragraph 18). This would provide better quality. Regarding claim 11, the method claim 11 recites similar limitations as product/module claim 4, and thus is rejected under similar rationale. Regarding claim 17, the system claim 17 recites similar limitations as product/module claim 4, and thus is rejected under similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Dror et al. (U.S. Patent Application Publication No. 20170039725), fig. 4, step 401 teaches an input radiograph/image, steps 404-410 teach a region of interest and segmentation/dissection of such, and step 416 teaches the output of image data. Any inquiry concerning this communication or earlier communications from the examiner should be directed to NAUMAN U AHMAD whose telephone number is (703)756-5306. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. 
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /N.U.A./Examiner, Art Unit 2611 /KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611
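The §112(b) fix recommended in the action above (introduce each element with "a"/"an" on first mention, use "the" only thereafter) is mechanical enough to illustrate with a toy checker. This is a hypothetical sketch for illustration only, not a claim-drafting tool; the regex patterns and the example claim text are assumptions:

```python
import re

def antecedent_issues(claim_text: str, terms: list[str]) -> list[str]:
    """Flag any term that appears with a definite article ('the'/'said')
    before it has been introduced with an indefinite one ('a'/'an')."""
    issues = []
    for term in terms:
        definite = re.search(rf"\b(?:the|said)\s+{re.escape(term)}\b",
                             claim_text, re.I)
        indefinite = re.search(rf"\ban?\s+{re.escape(term)}\b",
                               claim_text, re.I)
        if definite and (indefinite is None
                         or definite.start() < indefinite.start()):
            issues.append(term)
    return issues

# Toy example: "the region of interest" is never introduced as
# "a region of interest", so it gets flagged.
claim = ("A dissectography module for dissecting a 2D radiograph, "
         "wherein the dissecting corresponds to extracting the region of interest")
print(antecedent_issues(claim, ["region of interest"]))  # -> ['region of interest']
```

Note that the rejection here actually turns on the gerund "dissecting" rather than a noun phrase, so a real check would need parsing; the sketch only captures the "a"/"the" pattern the examiner's note recommends.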

Prosecution Timeline

May 29, 2024
Application Filed
Feb 19, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592036
BLENDING ELEVATION DATA INTO A SEAMLESS HEIGHTFIELD
2y 5m to grant Granted Mar 31, 2026
Patent 12530807
METHODS AND SYSTEMS FOR COMPRESSING DIGITAL ELEVATION MODEL DATA
2y 5m to grant Granted Jan 20, 2026
Patent 12518472
DEFORMABLE NEURAL RADIANCE FIELDS
2y 5m to grant Granted Jan 06, 2026
Patent 12518482
VIRTUAL REPRESENTATIVE CONDITIONING SYSTEM
2y 5m to grant Granted Jan 06, 2026
Patent 12505601
CONTENT DISPLAY CONTROL DEVICE, CONTENT DISPLAY CONTROL METHOD, AND STORAGE MEDIUM STORING CONTENT DISPLAY CONTROL PROGRAM
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
98%
With Interview (+19.8%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 36 resolved cases by this examiner. Grant probability derived from career allow rate.
