Prosecution Insights
Last updated: April 19, 2026
Application No. 18/443,690

OBJECT RECONSTRUCTION IN DIGITAL IMAGES

Non-Final OA (§103, §112)
Filed: Feb 16, 2024
Examiner: LANTZ, KARSTEN FOSTER
Art Unit: 2664
Tech Center: 2600 (Communications)
Assignee: Hoffmann-La Roche, Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Grants only 0% of cases.
Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 2y 9m (typical timeline)
Total Applications: 19 (19 currently pending, across all art units)

Statute-Specific Performance

§103: 73.8% (+33.8% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that the application is a National Stage application of PCT/EP2022/072887. Priority to EP21192182.0 with a priority date of 8/19/2021 is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDS dated 6/03/2024 has been considered and placed in the application file.

Claim Objections

The inclusion of figure reference characters within the claim text is unnecessary and restricts the scope of the claim to the specific embodiment shown in the drawings. The specific mapping of elements to figures in the current claims causes the claim to be interpreted narrowly based on the drawings, rather than under its broadest reasonable interpretation in light of the specification, as required by 35 U.S.C. 112(b).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim 14 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. Claim 14 recites "and/or." It is unclear which combination of options is required or permitted, making it unclear if only one, the other, or both must be true. All claims containing "and/or" will be interpreted using the "or" variant.

Claim 3 recites the limitation "the contrast-enhancing tumor, the regions of edema, and the surgical cavity". There is insufficient antecedent basis for this limitation in the claim.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 4, 6, 10, and 12 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2019/0030371 A1 (Han) in view of US Patent Publication 2022/0237785 A1 (Mitchell) and US Patent Publication 2019/0328779 A1 (Bae et al.).

Claim 1

Regarding Claim 1, Han teaches a method for training an artificial neural network, the method comprising the steps of:

a) receiving an image, wherein the at least one image comprises at least one object of interest ("The image acquisition device 232 may be configured to acquire one or more images of the patient's anatomy for a region of interest," par. 44);

b) receiving a ground-truth pixel-based annotation (104) of the received at least one image, wherein the annotation comprises a ground-truth segmentation mask for the at least one object of interest ("The 3D ground truth label map may be divided to sequential 2D ground truth label maps, respectively corresponding to the sequential stacks of adjacent 2D images, and pixels of the 2D ground truth label maps are associated with known anatomical structures," par. 59);

c) obtaining a predicted segmentation mask (108) by feeding the at least one received image (102) to a prediction function (106) ("The segmentation unit 403 may use at least one trained CNN model received from CNN model training unit 402 to predict the anatomical structure each voxel of a 4D image represents," par. 63) ("The encoding portion 524 of the CNN model 510 may include one or more convolutional layers 528.
Each convolutional layer 528 may have a plurality of parameters, such as the width ("W") and height ("H") determined by the upper input layer (e.g., the size of the input of convolutional layer 528), and a count of filters or kernels ("N") in the layer and their sizes," par. 72);

d) calculating the loss (112) using a training function (110), when the predicted segmentation mask (106) and the ground-truth segmentation mask of the annotation (104) are given as input to the training function ("During the training of CNN model 510, the loss layer may determine how the network training penalizes the deviation between the predicted 2D label map and the 2D ground truth label map," par. 81);

[Figure 7 shows the prediction and training arms workflow of the DCNN.]

e) optimizing the model parameters by minimizing the loss (114) with respect to the model parameters ("The model 700A parameters may be established from training data, such as by minimizing a loss function," par. 88);

f) replacing the model parameters with the optimized model parameters (114A) ("The model 700A parameters may be established from training data, such as by minimizing a loss function," par. 88).

Han does not explicitly teach all of: at least one image (102) from Magnetic Resonance Imaging brain scan sequences of post-surgical Glioblastoma patients, and wherein the prediction function is defined by randomly initialized model parameters (106A). However, Bae et al. teach at least one image (102) from Magnetic Resonance Imaging brain scan sequences of post-surgical Glioblastoma patients ("FIG. 8 shows MRI images after surgery and radiotherapy on the patient with glioblastoma," par. 44).
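As mapped above, steps a) through f) of claim 1 describe a conventional supervised training loop: predict a mask, score it against the ground-truth mask, and update the model parameters to reduce the loss. A minimal sketch of that loop follows (illustrative only; every name is hypothetical, and a per-pixel logistic model stands in for the claimed CNN prediction function):

```python
import numpy as np

rng = np.random.default_rng(0)

# a) received image and b) ground-truth pixel-based segmentation mask (toy 8x8 data)
image = rng.standard_normal((8, 8))
gt_mask = (image > 0).astype(float)

# Prediction function defined by randomly initialized model parameters
# (the limitation mapped to Mitchell): per-pixel logistic model sigmoid(w*x + b).
w, b = rng.standard_normal(), rng.standard_normal()

def predict(img, w, b):
    """c) obtain a predicted mask by feeding the image to the prediction function."""
    return 1.0 / (1.0 + np.exp(-(w * img + b)))

def loss_fn(pred, gt):
    """d) training function: mean per-pixel binary cross-entropy."""
    eps = 1e-7
    return -np.mean(gt * np.log(pred + eps) + (1 - gt) * np.log(1 - pred + eps))

lr = 0.5
for _ in range(200):
    pred = predict(image, w, b)
    # e) minimize the loss with respect to the model parameters (gradient step)
    grad_logit = (pred - gt_mask) / pred.size   # d(loss)/d(logit) for mean BCE
    grad_w = np.sum(grad_logit * image)
    grad_b = np.sum(grad_logit)
    # f) replace the model parameters with the optimized parameters
    w, b = w - lr * grad_w, b - lr * grad_b

final_loss = loss_fn(predict(image, w, b), gt_mask)
```

The randomly drawn `w` and `b` play the role of the "randomly initialized model parameters," and the last two lines of the loop are steps e) and f): a gradient step followed by parameter replacement.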
[Figure 8 shows the post-surgery glioblastoma patient MRI images.]

However, Mitchell teaches wherein the prediction function is defined by randomly initialized model parameters (106A) ("The segmenting computer 150 initializes each of the multiple instances of the neural network with respectively randomized weight parameters," par. 90).

Therefore, taking the teachings of Han, Mitchell, and Bae et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the segmentation and prediction architecture as taught by Han to use random parameter initialization as taught by Mitchell and post-surgical glioblastoma patient images as taught by Bae et al. It would have been obvious to one of ordinary skill in the art to modify Han's segmentation and prediction architecture to include Mitchell's random parameter initialization because such a modification is based on the use of known techniques to improve similar devices in the same way. More specifically, Mitchell's teaching of initializing instances of a neural network with randomized weights is comparable to Han's method of establishing model parameters. Therefore, it is within the capabilities of one of ordinary skill in the art to modify Han's segmentation and prediction architecture to include Mitchell's random parameter initialization with the predictable result of improving the robustness and accuracy of the segmentation results by reducing overfitting and avoiding local minima. Additionally, the suggestion/motivation for using post-surgical glioblastoma patient images as taught by Bae et al. would have been that, "The patient had no subjective symptom from 2 months of administration of Example 1, and the sizes of remaining tumors were confirmed to decrease after the administration of Example 1.
The pre-treatment MRI images of this patient are shown in FIG. 7, and MRI images after the surgery and radiotherapy are shown in FIG. 8." as noted by the Bae et al. disclosure in paragraph [0084], which also motivates combination because of the reasonable expectation that post-surgical MRI imaging of glioblastoma patients will confirm treatment efficacy and allow for the assessment of potential complications; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 2

Regarding claim 2, Han, Mitchell, and Bae et al. teach the method of claim 1 as noted above. Bae et al. teach wherein the one or more images are obtained from any combination of native T1-weighted, post-contrast T1-weighted, T2-weighted and T2-Fluid Attenuated Inversion Recovery MRI sequences, including single sequences, groups of two sequences, groups of three sequences and/or all four sequences ("FIG. 8 shows MRI images after surgery and radiotherapy on the patient with glioblastoma in Clinical Test-2. FIG. 8A shows T2 weighted SE & FLAIR images and FIG. 8B shows T2 weighted axial SE images," par. 44). Han, Mitchell, and Bae et al. are combined as per claim 1.

Claim 4

Regarding claim 4, Han, Mitchell, and Bae et al. teach the method of claim 1 as noted above.

[Figure 4 shows the image segmentation system using multiple CNN models.]

Han teaches wherein the prediction function is a single ensemble of multiple base models, and wherein the method is performed for each base model ("FIG. 4 illustrates an example including an image segmentation system 400 for segmenting 3D images, such as using first and second CNN models, as mentioned in relation to other examples described in this document," par. 57). Han, Mitchell, and Bae et al. are combined as per claim 1.

Claim 6

Regarding claim 6, Han, Mitchell, and Bae et al. teach the method of claim 1 as noted above.
Han teaches wherein the training is performed on two separate sets of received images ("a computer-implemented method for segmentation of anatomical features from 3D medical imaging information may include receiving the three-dimensional (3D) medical imaging information defining a first volume, applying a first trained convolutional neural network (CNN) to the three-dimensional medical imaging information, using an output from the first trained CNN determine a region-of-interest within the first volume, the region-of-interest defining a lesser, second volume, applying a different, second trained CNN to the region-of-interest … at least one of the first and second loss functions comprises a cross-entropy loss function, and the second CNN provides enhanced segmentation detail as compared to the first CNN when the first and second CNNs are applied serially to the 3D medical imaging information," par. 6 and 7, wherein additional CNNs processing two more sets of images is an obvious modification to increase segmentation accuracy/detail), wherein the sets are defined based on ranges of the volume distributions ("cause the processor to receive the three-dimensional (3D) medical imaging information defining a first volume, apply a first trained convolutional neural network (CNN) to the three-dimensional medical imaging information," par. 8) of the contrast-enhancing tumor and the regions of edema ("Medical images 246 may include information such as imaging data associated with a patient anatomical region, organ, or volume of interest segmentation data," par. 31). Since there are a finite number of identified, predictable potential solutions to the recognized need (as discussed above), one of ordinary skill in the art could have pursued the known potential solutions with a reasonable expectation of success. Han, Mitchell, and Bae et al. are combined as per claim 1.

Claim 10

Regarding claim 10, Han, Mitchell, and Bae et al.
teach the method of claim 1 as noted above. Han teaches wherein the step e) of optimizing the model parameters by minimizing the loss (114) with respect to the model parameters is performed using a stochastic gradient descent ("Each of the first and second DCNN models (e.g., the 2.5D model and the 3D model) were trained from scratch using a stochastic gradient descent with momentum optimization," par. 95). Han, Mitchell, and Bae et al. are combined as per claim 1.

Claim 12

Regarding claim 12, Han, Mitchell, and Bae et al. teach the method of claim 1 as noted above. Han teaches the use of an artificial neural network model trained according to claim 1 to detect, segment and characterize objects of interest in images ("The segmentation unit 403 may use at least one trained CNN model received from CNN model training unit 402 to predict the anatomical structure each voxel of a 4D image represents," par. 63) ("The image acquisition device 232 may be configured to acquire one or more images of the patient's anatomy for a region of interest," par. 44). Han does not explicitly teach all of: images obtained from MRI brain scan sequences of post-surgical Glioblastoma patients, and wherein the objects of interest comprise the contrast-enhancing tumor, the regions of edema, and the surgical cavity. Bae et al. teach images obtained from MRI brain scan sequences of post-surgical Glioblastoma patients, and wherein the objects of interest comprise the contrast-enhancing tumor, the regions of edema, and the surgical cavity ("FIG. 8 shows MRI images after surgery and radiotherapy on the patient with glioblastoma," par. 44). Han, Mitchell, and Bae et al. are combined as per claim 1.

Claim Rejections - 35 USC § 103

Claim 3 is rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2019/0030371 A1 (Han), US Patent Publication 2022/0237785 A1 (Mitchell), and US Patent Publication 2019/0328779 A1 (Bae et al.)
in view of US Patent Publication 2020/0364509 A1 (Weinzaepfel et al.).

Claim 3

Regarding Claim 3, Han, Mitchell, and Bae et al. teach the method of claim 1 as noted above. Bae et al. teach wherein the one or more images each comprise multiple objects of interest, including the contrast-enhancing tumor, the regions of edema, and the surgical cavity ("FIG. 8 shows MRI images after surgery and radiotherapy on the patient with glioblastoma," par. 44). Han teaches wherein the training function of step d) is averaged over all objects of interest ("Additionally, the ReLu layer may reduce or avoid saturation during a backpropagation training process," par. 75, wherein backpropagation includes averaging the loss over each training function). Bae et al. and Han do not explicitly teach wherein one prediction function per object of interest and one training function per object of interest are implemented. However, Weinzaepfel et al. teach wherein one prediction function per object of interest and one training function per object of interest are implemented ("Branch A, after being processed by a convolution neural network (500 of FIG. 7), a convolution neural network being a neural network composed of interleaving convolution layers and rectified linear units, predicts binary segmentation for each object-of-interest (700 of FIG. 7) and x and y reference image coordinates regression, with an object-of-interest specific regressor (800 and 900, respectively, of FIG. 7). In one embodiment, the convolution neural network predicts segmentation and correspondences on a dense grid of size 56×56 in each box which is then interpolated to obtain a per-pixel prediction," par. 57). Therefore, taking the teachings of Han, Mitchell, Bae et al., and Weinzaepfel et al.
as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the segmentation and prediction architecture as taught by Han, random parameter initialization as taught by Mitchell, and post-surgical glioblastoma patient images as taught by Bae et al. to use the one-vs-rest classifier as taught by Weinzaepfel et al. The suggestion/motivation for doing so would have been that, "Based upon this evaluation, homography data augmentation, at training, allows the generation of more viewpoints of the objects-of-interest, and thus, enabling improved detection and matching of objects-of-interest at test time with novel viewpoints" as noted by the Weinzaepfel et al. disclosure in paragraph [0093], which also motivates combination because the combination would predictably have a higher accuracy, as there is a reasonable expectation that more objects of interest will be detected/predicted in the images; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim Rejections - 35 USC § 103

Claims 5, 7, 8, and 11 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2019/0030371 A1 (Han), US Patent Publication 2022/0237785 A1 (Mitchell), and US Patent Publication 2019/0328779 A1 (Bae et al.) in view of Isensee, F. et al., "nnU-Net for Brain Tumor Segmentation," arXiv.org, Cornell University Library, NY: 1-15 (Nov 2, 2020) (submitted by applicant in IDS dated 6/03/2024).

Claim 5

Regarding Claim 5, Han, Mitchell, and Bae et al. teach the method of claim 4 as noted above. Han, Mitchell, and Bae et al. do not explicitly teach wherein the prediction function is a single ensemble of five confidence-aware nnU-Nets, and wherein the training is performed for each nnU-Net.
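In general terms, a "single ensemble of five" prediction function of the kind recited in claim 5 fuses the outputs of independently trained members, commonly by averaging their per-pixel class probabilities and taking the argmax. A schematic sketch of that fusion rule (the stand-in "models" are hypothetical random linear filters for illustration, not the actual nnU-Net implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Five stand-in ensemble members: each maps an (H, W) image to per-pixel class
# logits via a fixed random class-weight vector (purely illustrative).
n_members, n_classes = 5, 3
filters = rng.standard_normal((n_members, n_classes))

def member_predict(img, f):
    # Per-pixel logits: pixel intensity scaled by the member's class weights.
    return img[..., None] * f            # shape (H, W, n_classes)

image = rng.standard_normal((4, 4))

# Ensemble rule: average the members' per-pixel class probabilities,
# then take the argmax as the fused segmentation mask.
probs = np.mean([softmax(member_predict(image, f)) for f in filters], axis=0)
mask = probs.argmax(axis=-1)             # fused (4, 4) label map
```

Averaging probabilities (rather than logits or hard labels) keeps the fused output a valid per-pixel distribution, which is one common way ensembles reduce the variance of any single member.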
[Figure 1 shows the five downsampling operations network architecture.]

However, Isensee et al. teach wherein the prediction function is a single ensemble of five confidence-aware nnU-Nets, and wherein the training is performed for each nnU-Net ("A total of five downsampling operations are performed, resulting in a feature map size 4 x 4 x 4 in the bottleneck," sec. 2.2, pg. 4). Therefore, taking the teachings of Han, Mitchell, Bae et al., and Isensee et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the segmentation and prediction architecture as taught by Han, random parameter initialization as taught by Mitchell, and post-surgical glioblastoma patient images as taught by Bae et al. to use the nnU-Net training architecture as taught by Isensee et al. The suggestion/motivation for doing so would have been that, "We recently proposed nnU-Net [25], a general purpose segmentation method that automatically configures segmentation pipelines for arbitrary biomedical datasets. nnU-Net set new state of the art results on the majority of the 23 datasets it was tested on, underlining the effectiveness of this approach" as noted by the Isensee et al. disclosure in section 1, pg. 2, which also motivates combination because the combination would predictably have more utility for general segmentation, as there is a reasonable expectation that nnU-Net architectures automatically configure segmentation pipelines based on dataset properties; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 7

Regarding claim 7, Han, Mitchell, and Bae et al. teach the method of claim 1 as noted above. Han, Mitchell, and Bae et al.
do not explicitly teach wherein the training is performed for at least 500 epochs, in particular for 1000 epochs. However, Isensee et al. teach wherein the training is performed for at least 500 epochs, in particular for 1000 epochs ("Training runs for a total of 1000 epochs, where one epoch is defined as 250 iterations," sec. 2.2, pg. 4). Han, Mitchell, Bae et al., and Isensee et al. are combined as per claim 5.

Claim 8

Regarding claim 8, Han, Mitchell, and Bae et al. teach the method of claim 1 as noted above. Han, Mitchell, and Bae et al. do not explicitly teach wherein within one epoch at least 100 batches are processed, in particular 250 batches. However, Isensee et al. teach wherein within one epoch at least 100 batches are processed, in particular 250 batches ("Training runs for a total of 1000 epochs, where one epoch is defined as 250 iterations," sec. 2.2, pg. 4). Han, Mitchell, Bae et al., and Isensee et al. are combined as per claim 5.

Claim 11

Regarding claim 11, Han, Mitchell, and Bae et al. teach the method of claim 10 as noted above. Isensee et al. teach wherein the stochastic gradient descent is performed with Nesterov momentum within (0.9, 0.99) ("nnU-Net uses stochastic gradient descent with an initial learning rate of 0.01 and a Nesterov momentum of 0.99," sec. 2.2, pg. 4). Han, Mitchell, Bae et al., and Isensee et al. are combined as per claim 5.

Claim Rejections - 35 USC § 103

Claim 9 is rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2019/0030371 A1 (Han), US Patent Publication 2022/0237785 A1 (Mitchell), and US Patent Publication 2019/0328779 A1 (Bae et al.) in view of Isensee, F. et al., "nnU-Net for Brain Tumor Segmentation," arXiv.org, Cornell University Library, NY: 1-15 (Nov 2, 2020) and US Patent Publication 2022/051402 A1 (Dikici et al.).

Claim 9

Regarding Claim 9, Han, Mitchell, Bae et al., and Isensee et al. teach the method of claim 8 as noted above. Han, Mitchell, Bae et al., and Isensee et al.
do not explicitly teach wherein the training is performed applying on the images a random patch scaling within a range (0.7, 1.4), and/or a random rotation, and/or a random gamma correction within a range (0.7, 1.5), and/or a random mirroring. However, Dikici et al. teach wherein the training is performed applying on the images a random patch scaling within a range (0.7, 1.4), and/or a random rotation, and/or a random gamma correction within a range (0.7, 1.5), and/or a random mirroring ("Each positive sample goes through augmentation process: (B-1) mid-axial slice of an original cropped sample, (B-2) random elastic deformation is applied, (B-3) random gamma correction is applied, (B-4) sample volume is randomly flipped, and (B-5) sample volume is randomly rotated," par. 31). Therefore, taking the teachings of Han, Mitchell, Bae et al., Isensee et al., and Dikici et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the segmentation and prediction architecture as taught by Han, random parameter initialization as taught by Mitchell, post-surgical glioblastoma patient images as taught by Bae et al., and the nnU-Net training architecture as taught by Isensee et al. to use random rotation as taught by Dikici et al. The suggestion/motivation for doing so would have been that, "In the proposed framework, a form of the region-based strategy is introduced; random gamma corrections are applied to cropped volumetric regions during the augmentation stage [38]. Accordingly, the framework (1) does not make any assumptions about the histogram shape or intensity characteristics of given MRI datasets, and (2) avoids losing or corrupting potentially valuable intensity features, which is a common disadvantage of image intensity normalization-based methods" as noted by the Dikici et al.
disclosure in paragraph [0108], which also motivates combination because the combination would predictably have a higher accuracy, as there is a reasonable expectation that all valuable features of the dataset will be preserved; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim Rejections - 35 USC § 103

Claim 14 is rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2019/0030371 A1 (Han), US Patent Publication 2022/0237785 A1 (Mitchell), and US Patent Publication 2019/0328779 A1 (Bae et al.) in view of US Patent Publication 2021/090257 A1 (Bhatia et al.).

Claim 14

Regarding Claim 14, Han, Mitchell, and Bae et al. teach the use of an artificial neural network model according to claim 12. Han, Mitchell, and Bae et al. do not explicitly teach wherein the features of the objects of interest extracted comprise volumetric and bidimensional diametrical measurements. However, Bhatia et al. teach wherein the features of the objects of interest extracted comprise volumetric and bidimensional diametrical measurements ("The feature signature f.sub.ROI may comprise numerous features which as a sum characterize the analyzed region of interest ROI. For instance, the feature signature f.sub.ROI may comprise texture features, shape features, intensity/density features, color or grey scale features, size features, structural features, or localization features," par. 176). Therefore, taking the teachings of Han, Mitchell, Bae et al., and Bhatia et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the segmentation and prediction architecture as taught by Han, random parameter initialization as taught by Mitchell, and post-surgical glioblastoma patient images as taught by Bae et al. to use the particular feature qualities taught by Bhatia et al.
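For context on claim 14, volumetric and bidimensional measurements are conventionally derived from a binary segmentation mask: volume as foreground-voxel count times per-voxel volume, and a bidimensional measurement as the product of two perpendicular in-plane diameters on the slice of largest lesion area. A simplified sketch (all names are hypothetical, and axis-aligned extents stand in for true perpendicular diameters):

```python
import numpy as np

def volume_mm3(mask, voxel_spacing_mm):
    """Volumetric measurement: foreground voxel count times single-voxel volume."""
    dz, dy, dx = voxel_spacing_mm
    return float(mask.sum()) * dz * dy * dx

def bidimensional_mm2(mask, voxel_spacing_mm):
    """Bidimensional measurement on the axial slice with the largest lesion area:
    product of the axis-aligned in-plane extents (a simplification of the
    major diameter times its largest perpendicular diameter)."""
    dz, dy, dx = voxel_spacing_mm
    areas = mask.sum(axis=(1, 2))            # lesion area per axial slice
    sl = mask[int(areas.argmax())]
    ys, xs = np.nonzero(sl)
    if ys.size == 0:
        return 0.0                           # empty mask: no measurable lesion
    d_y = (ys.max() - ys.min() + 1) * dy     # extent along y, in mm
    d_x = (xs.max() - xs.min() + 1) * dx     # extent along x, in mm
    return float(d_y * d_x)

# Toy mask: a 2x3x4-voxel block inside a 10^3 volume, 1 mm isotropic spacing.
m = np.zeros((10, 10, 10), dtype=np.uint8)
m[4:6, 3:6, 2:6] = 1
vol = volume_mm3(m, (1.0, 1.0, 1.0))        # 2*3*4 voxels = 24 mm^3
bi = bidimensional_mm2(m, (1.0, 1.0, 1.0))  # 3 mm x 4 mm = 12 mm^2
```

Passing the voxel spacing explicitly matters here: MRI volumes are rarely isotropic, so voxel counts alone are not comparable across scans.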
The suggestion/motivation for doing so would have been that, "The feature signature f.sub.ROI may comprise numerous features which as a sum characterize the analyzed region of interest" as noted by the Bhatia et al. disclosure in paragraph [0176], which also motivates combination because the combination would predictably have higher efficiency and accuracy, as there is a reasonable expectation that combining precise volumetric/dimensional measurements with a neural network architecture would lead to robust, automated, and accurate segmentation and classification of the object, reducing manual intervention and measurement error; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US Patent Publication 2024/0331140 A1 to Prasad et al. discloses receiving (102) imaging data representative of a volume of a subject's anatomy and using a segmentation approach to identify the at least one anatomical feature of interest in the received imaging data.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARSTEN F LANTZ, whose telephone number is (571) 272-4564. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Karsten F. Lantz/
Examiner, Art Unit 2664
Date: 2/9/2026

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664

Prosecution Timeline

Feb 16, 2024: Application Filed
Feb 17, 2026: Non-Final Rejection (§103, §112)
Apr 07, 2026: Interview Requested


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
