Prosecution Insights
Last updated: April 19, 2026
Application No. 18/058,536

GENERATING AN ABOVE GROUND BIOMASS PREDICTION MODEL

Status: Non-Final OA (§103), Round 3
Filed: Nov 23, 2022
Examiner: DRYDEN, EMMA ELIZABETH
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: VENTUREONE - SOLE PROPRIETORSHIP L.L.C.
Grant Probability: 58% (Moderate); 83% with interview
Expected OA Rounds: 3-4
Estimated Time to Grant: 3y 3m

Examiner Intelligence

Career Allow Rate: 58% (7 granted / 12 resolved; -3.7% vs TC avg)
Interview Lift: +25.0% on resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 46 across all art units; 34 currently pending

Statute-Specific Performance

§101: 9.7% (-30.3% vs TC avg)
§103: 56.4% (+16.4% vs TC avg)
§102: 16.6% (-23.4% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Applicant claims priority to provisional application 63/264,569. Claims 1-3, 5-10, 12-15, and 17-20 are supported by the provisional application, but the “synthetic-aperture radar imagery data” of claims 4 and 16 is not. Accordingly, the priority date for claims 4 and 16 is the filing date of application 18/058,536: 11/23/2022.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's RCE submission filed on 03/04/2026 has been entered.

Response to Amendment

The amendment filed 02/17/2026 has been entered. Applicant’s amendments to the claims have overcome the claim objections and 35 U.S.C. 112 rejections set forth in the Final Office Action dated 12/19/2025. Claims 1-10 and 12-20 remain pending in the application.

Response to Arguments

Applicant’s arguments regarding the amendment have been considered but are moot because the new ground of rejection does not rely on any combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Objections

Claims 3, 5, and 17 are objected to because of the following informalities: In claim 3, “the preprocessed training satellite imagery data” should read “the preprocessed satellite imagery data” (based on claim 15 language; “the preprocessed training satellite imagery data” lacks antecedent basis). In claims 5 and 17, “the LIDAR data” should read “the training LIDAR data” (the LIDAR data can refer to the corresponding LIDAR data in claims 1/13 that is not used with the model). Appropriate correction is required.

Claim Interpretation

Regarding claim 9, the digital elevation model stack is interpreted to be digital elevation model data including aspect, roughness, slope, Topographic Position Index, and Terrain Ruggedness Index data (in accordance with the definition provided in paragraph 40 of the specification).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 8, 13, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Naidoo et al. (Naidoo, L., Mathieu, R., Main, R., Kleynhans, W., Wessels, K., Asner, G., & Leblon, B. (2015). Savannah woody structure modelling and mapping using multi-frequency (X-, C- and L-band) Synthetic Aperture Radar data. ISPRS Journal of Photogrammetry and Remote Sensing, 105, 234-250.), hereinafter Naidoo, in view of Li et al. (Li, Y., Li, M., Li, C., & Liu, Z. (2020). Forest aboveground biomass estimation using Landsat 8 and Sentinel-1A data with machine learning algorithms. Scientific Reports, 10(1), 9952.), hereinafter Li.
Regarding claim 1, Naidoo teaches a method of generating an above ground biomass density map of a target landmass (Naidoo, see AGB in abstract and above ground biomass shown in t/ha units in FIG. 7(i)), the method comprising:

receiving training satellite imagery data and training environmental data for the target landmass (Naidoo, SAR images and digital elevation model data, pg. 240, section 3.4; last para on pg. 240: “For the modelling process, the SAR frequency datasets were selected as the input (independent) variables”);

receiving training Light Detection and Ranging (LIDAR) data comprising biomass density measurements at discrete locations within the target landmass (Naidoo, pg. 236, 2nd para: “Training and validation data were derived from airborne LiDAR data to evaluate the SAR modelling accuracies.”; pg. 239, section 3.3);

training an above ground biomass density model using the training satellite imagery data and the training environmental data as inputs to the density model to predict biomass density measurements (Naidoo, model predicts AGB density; abstract: “This study sought to test and compare the accuracy of modelling, in a Random Forest machine learning environment, woody above ground biomass (AGB), canopy cover (CC) and total canopy volume (TCV) in South African savannahs using a combination of X-band (TerraSAR-X), C-band (RADARSAT-2) and L-band (ALOS PALSAR) radar datasets”) corresponding to the training LIDAR data used as training labels (Naidoo, LIDAR data is used for training and validation, see pg. 236, 2nd para citation above and pg. 237, section 3: “The SAR-derived woody structural metrics were then validated using the LiDAR-derived woody structural metrics (CC, TCV and AGB) to ascertain error statistics and error distribution”); and

applying the trained above ground biomass density model to satellite imagery data and to environmental data, exclusive of LIDAR-derived data as input to the trained above ground biomass density model (Naidoo, last para on pg. 240: “For the modelling process, the SAR frequency datasets were selected as the input (independent) variables while the LiDAR derived metrics were selected as the target (dependent) variables”), to generate the above ground biomass density map of the target landmass in areas without corresponding LIDAR data (Naidoo, model predicts AGB density, see FIG. 7(i), using input SAR/DEM data – see above citations; see section 3.6 on pg. 241 describing how LIDAR- and SAR-derived results are compared, thus AGB density data is predicted for areas without LIDAR data in an inference phase).

Naidoo fails to explicitly teach a non-transitory machine-readable medium having executable instructions to cause one or more processing units to perform the method. However, Li teaches a similar method (Li, abstract: “Landsat 8 Operational Land Imager and Sentinel-1A data and China’s National Forest Continuous Inventory data in combination with three algorithms, either the linear regression (LR), random forest (RF), or the extreme gradient boosting (XGBoost), were used to estimate biomass of the subtropical forests in Hunan Province, China”), including a non-transitory machine-readable medium having executable instructions to cause one or more processing units to perform the method (Li, computer utilized to execute the method – see computational resources referenced in abstract and pg. 4).
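For illustration only (this sketch is not part of the record): the workflow the examiner maps onto claim 1 — SAR/DEM predictors as inputs, LiDAR-derived AGB as training labels, then prediction over pixels lacking LiDAR coverage — can be sketched with scikit-learn's Random Forest (the model family Naidoo uses). All array names, feature choices, and the synthetic data below are assumptions.

```python
# Sketch of the claim-1 workflow: train on SAR/DEM features where
# LiDAR-derived AGB labels exist, then predict AGB for pixels with
# no LiDAR coverage. Data is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Per-pixel features: X-, C-, L-band backscatter plus DEM elevation
# (hypothetical stand-ins for the Naidoo inputs).
n_train, n_infer = 500, 200
X_train = rng.normal(size=(n_train, 4))
# LiDAR-derived AGB in t/ha used as training labels (synthetic).
y_train = 50 + 10 * X_train[:, 2] + rng.normal(scale=2.0, size=n_train)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)          # training phase: LiDAR labels

X_no_lidar = rng.normal(size=(n_infer, 4))
agb_map = model.predict(X_no_lidar)  # inference: no LiDAR input needed
```

Note that LiDAR enters only as the label source at training time; at inference the model consumes satellite/environmental features alone, which is how the map extends into areas without LiDAR data.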
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the storage and processing units taught by Li with the method of Naidoo in order to store data and execute the method (see citations above).

Regarding claim 4 (dependent on claim 1), Naidoo in view of Li teaches wherein the satellite imagery data includes at least one of multispectral satellite imagery data or synthetic-aperture radar imagery data (Naidoo, SAR imagery data – see abstract and sections 3.1 and 3.4).

Regarding claim 8 (dependent on claim 1), Naidoo in view of Li teaches wherein the environmental data includes digital elevation model data (Naidoo, digital elevation model data in section 3.4).

Regarding claim 13, all claim limitations are met and rendered obvious by Naidoo in view of Li because the method steps of claim 1 are the same as those of claim 13.

Regarding claim 16 (dependent on claim 13), all claim limitations are met and rendered obvious by Naidoo in view of Li because the method steps of claim 4 are the same as those of claim 16.

Regarding claim 20 (dependent on claim 13), all claim limitations are met and rendered obvious by Naidoo in view of Li because the method steps of claim 8 are the same as those of claim 20.

Claims 2 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Naidoo in view of Li, in further view of Haddad et al. (Haddad, O., Abdelfattah, R., & Ajili, H. (2012, July). Extracting radar shadow from SAR images. In 2012 IEEE International Geoscience and Remote Sensing Symposium (pp. 2101-2104). IEEE.), hereinafter Haddad.

Regarding claim 2 (dependent on claim 1), Naidoo in view of Li teaches wherein the method further comprises: preprocessing the satellite imagery data (Naidoo, pg. 240, section 3.4: “The SAR intensity images (X-, C- and L-band) were pre-processed according to the following steps: multi-looking, radiometric calibration (conversion of raw digital numbers into sigma naught (σ0) backscatter values), geocoding, topographic normalisation of the backscatter and filtering.”), but fails to explicitly teach wherein the preprocessed satellite imagery data includes at least one of cloud masking data, shadow masking data, mosaicking data, or normalized difference spectral index calculation. However, Haddad teaches a method for shadow masking satellite imagery data (Haddad, shadow mask described on pg. 2102 and FIG. 2).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the shadow masking of SAR imagery data, taught by Haddad, with the method of Naidoo in view of Li in order to identify regions potentially containing distortions before input to the machine learned model (Haddad, pg. 2101, section 1: “The image thus produced contains spatial distortions related to the geometric characteristics and inherent distortions in the acquisition geometry”). For example, shadow datapoints may be omitted from training the model.

[Image in original: Haddad, FIG. 2]

Regarding claim 14 (dependent on claim 13), all claim limitations are met and rendered obvious by Naidoo in view of Li and Haddad because the method steps of claim 2 are the same as those of claim 14.

Claims 3 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Naidoo in view of Li, in further view of Haddad and Mazza et al. (Mazza, A., Gargiulo, M., Scarpa, G., & Gaetano, R. (2018, July). Estimating the NDVI from SAR by convolutional neural networks. In IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium (pp. 1954-1957). IEEE.), hereinafter Mazza.
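The shadow-masking step discussed above for claim 2 can be illustrated with a simple threshold on backscatter: radar shadow regions return little energy, so very low sigma-naught values can be flagged and omitted from training. This is a deliberately simplified stand-in, not Haddad's actual geometry-based algorithm; the threshold value and array names are assumptions.

```python
# Illustrative stand-in for radar shadow masking: flag very
# low-backscatter SAR pixels as potential shadow so they can be
# omitted from model training. A fixed dB threshold is an assumed
# simplification, not Haddad's method.
import numpy as np

def shadow_mask(sigma0_db, threshold_db=-20.0):
    """Return True where backscatter falls below the shadow threshold."""
    return sigma0_db < threshold_db

sigma0 = np.array([[-25.0, -8.0],
                   [-19.0, -31.0]])   # sigma-naught values in dB
mask = shadow_mask(sigma0)            # [[True, False], [False, True]]
valid_pixels = sigma0[~mask]          # shadow datapoints dropped
```

In the combined-reference reading, `valid_pixels` would be the subset of training samples actually fed to the biomass model.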
Regarding claim 3 (dependent on claim 2), Naidoo in view of Li and Haddad fails to explicitly teach wherein the method further comprises: generating a normalized difference spectral index stack from the preprocessed training satellite imagery data. However, Mazza teaches generating a normalized difference spectral index stack from satellite imagery data (Mazza, pg. 1955, 1st para: “More precisely, in this work the designed network is asked to produce normalized difference vegetation index (NDVI)/Sentinel-2 maps from tuples of double polarized (VV-VH) SAR-Sentinel-1 images sensed about the date of the target (cloudy) Sentinel-2 image”).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the normalized difference spectral index stack of SAR imagery data, taught by Mazza, with the method of Naidoo in view of Li and Haddad in order to consider vegetation indexes of the captured satellite images, those which are typically absent in SAR data, before input to the machine learned model (Mazza, pg. 1954, section 1: “Vegetation monitoring is critical for analyzing the characteristics of climate, soil, geology, and many other processes of interest. For this reason many vegetation indexes have been defined, among which the NDVI is the most widely and frequently used [1]. Such an index, as well as many others, including those for water, bare soil and so on, is defined as combination of multispectral bands”). For example, NDVI captured indices could help inform which satellite images to input into the machine learned model.

Regarding claim 15 (dependent on claim 14), all claim limitations are met and rendered obvious by Naidoo in view of Li, Haddad, and Mazza because the method steps of claim 3 are the same as those of claim 15.

Claims 5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Naidoo in view of Li, in further view of Di Tommaso et al. (Di Tommaso, S., Wang, S., & Lobell, D. B., Combining GEDI and Sentinel-2 for wall-to-wall mapping of tall and short crops, Environmental Research Letters, 2021, 16(12), 125002), hereinafter Tommaso.

Regarding claim 5 (dependent on claim 1), Naidoo in view of Li fails to explicitly teach wherein the training LIDAR data includes global ecosystem dynamics investigation LIDAR data. However, Tommaso teaches LIDAR data including global ecosystem dynamics investigation LIDAR data (Tommaso, pg. 1, Abstract: “we explore the use of NASA’s Global Ecosystem Dynamics Investigation (GEDI) spaceborne lidar instrument”; “using GEDI” in section 3.3.3 on pg. 11).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the global ecosystem dynamics investigation LIDAR data of Tommaso with the method of Naidoo in view of Li in order to train the model with accurate landmass data (Tommaso, pg. 2, 3rd paragraph: “GEDI was designed with the goal of improving measures of forest canopy structure, and several recent studies have applied GEDI to this end”; pg. 3, 4th paragraph: “GEDI measures could also prove useful in cropland systems. In particular, crop height may be a more consistent feature of crops across regions than the spectral and phenological features detected by passive optical sensors”).

Regarding claim 17 (dependent on claim 13), all claim limitations are met and rendered obvious by Naidoo in view of Li and Tommaso because the method steps of claim 5 are the same as those of claim 17.

Claims 6-7 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Naidoo in view of Li, in further view of Hu et al. (Hu, T., Su, Y., Xue, B., Liu, J., Zhao, X., Fang, J., & Guo, Q. (2016). Mapping global forest aboveground biomass with spaceborne LiDAR, optical imagery, and forest inventory data. Remote Sensing, 8(7), 565.), hereinafter Hu.

Regarding claim 6 (dependent on claim 1), Naidoo in view of Li fails to explicitly teach wherein the environmental data includes climate data. However, Hu teaches a random forest model for estimating AGB density from multisource input (Hu, abstract and FIG. 1). Hu teaches wherein the environmental data input includes climate data (Hu, pg. 5, section 2.5, “Climatic Data”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the multisource input, including climate data, of Hu with the satellite data/method of Naidoo in view of Li in order to increase the amount of information informing the model, improving the accuracy of predictions about the target landmass (Hu, pg. 3, 2nd para: “This new product can help to improve the accuracy of predictions of carbon dynamics and quantify the carbon fluxes from deforestation, land cover change, and other disturbances”; last para on pg. 2: “Recently, many studies have tried to integrate multisource data to overcome the deficiencies of GLAS data and estimate regional- to continental-scale forest AGB”).

Regarding claim 7 (dependent on claim 1), Naidoo in view of Li fails to explicitly teach wherein the environmental data includes land cover data. However, Hu teaches a random forest model for estimating AGB density from multisource input (Hu, abstract and FIG. 1). Hu teaches wherein the environmental data input includes land cover data (Hu, pg. 5-6, section 2.6, “Land Cover Data”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the multisource input, including land cover data, of Hu with the satellite data/method of Naidoo in view of Li in order to increase the amount of information informing the model, improving the accuracy of predictions about the target landmass (Hu, pg. 3, 2nd para: “This new product can help to improve the accuracy of predictions of carbon dynamics and quantify the carbon fluxes from deforestation, land cover change, and other disturbances”; last para on pg. 2: “Recently, many studies have tried to integrate multisource data to overcome the deficiencies of GLAS data and estimate regional- to continental-scale forest AGB”).

Regarding claim 18 (dependent on claim 13), all claim limitations are met and rendered obvious by Naidoo in view of Li and Hu because the method steps of claim 6 are the same as those of claim 18.

Regarding claim 19 (dependent on claim 13), all claim limitations are met and rendered obvious by Naidoo in view of Li and Hu because the method steps of claim 7 are the same as those of claim 19.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Naidoo in view of Li, in further view of Tan et al. (CN Patent No. 112216052 A), hereinafter Tan, Carreno-Luengo et al. (Carreno-Luengo, H., Luzi, G., & Crosetto, M., Effects of rough topography in GNSS-R: A parametric study based on a digital elevation model, July 2019, In IGARSS 2019-2019 IEEE International Geoscience and Remote Sensing Symposium, pp. 8663-8666), hereinafter Luengo, and Hani et al. (Hani, A. F. M., Sathyamoorthy, D., & Asirvadam, V. S., A method for computation of surface roughness of digital elevation model terrains via multiscale analysis, 2011, Computers & Geosciences, 37(2), 177-192), hereinafter Hani.

Regarding claim 9 (dependent on claim 8), Naidoo in view of Li teaches further comprising: preprocessing the digital elevation model data (Naidoo, pg. 240, section 3.4: “A 20 m Digital Elevation Model (DEM) and a 90 m Shuttle Radar Topography Mission (SRTM) DEM were both used for the geocoding and orthorectification of the X-, C- and L-band SAR imagery”; steps further detailed in section 3.4); but fails to explicitly teach generating a digital elevation model stack from the preprocessed digital elevation model data.

However, Tan teaches generating a digital elevation model stack from preprocessed digital elevation model data (Tan, para 90: “The terrain, slope, aspect and altitude data are obtained from the digital elevation model”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the digital elevation data of Tan with the method of Naidoo in view of Li in order to obtain information and make subsequent predictions about the terrain based on the obtained data (Tan, para 90: “The terrain, slope, aspect and altitude data are obtained from the digital elevation model. Then, combined with ground meteorological station data (including relative humidity, precipitation, wind speed, etc.), prediction parameters related to fire risk are obtained from satellite images”).

Additionally (see Claim Interpretation section regarding claim 9 limitations), Luengo teaches generating Topographic Position Index and Terrain Ruggedness Index data from digital elevation model data (Luengo, pg. 8664, section 3.1 in the bottom left: “the DEM products of 250 m from the Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) are used to derive the following topographic descriptors”, see Terrain Ruggedness Index, Topographic Position Index, and slope listed as descriptors). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the digital elevation data of Luengo with the method of Naidoo in view of Li in order to further identify properties of the terrain (Luengo, pg. 8664, last sentence continued on the next pg.: “These parameters provide a better understanding of the properties of the terrain than raw DEM data. TRI, TPI provide different descriptions of the topographic heterogeneity. The curvature is described based on changes of slope”).
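For illustration (not part of the record): the DEM-derived descriptors named in the Claim Interpretation section (roughness, Topographic Position Index, Terrain Ruggedness Index) can be computed from a raw elevation grid. The sketch below uses common GDAL-style 3x3-window definitions as an assumed formulation; the cited references may define these descriptors differently.

```python
# Illustrative computation of DEM descriptors referenced for claim 9
# over a single 3x3 window. Formulas follow the common GDAL-style
# definitions (an assumption, not the cited references' exact math):
#   TPI       = center elevation minus mean of the 8 neighbors
#   TRI       = mean absolute difference between center and neighbors
#   roughness = elevation range (max - min) within the window
import numpy as np

def terrain_descriptors(dem, row, col):
    """Return (TPI, TRI, roughness) for one interior DEM cell."""
    window = dem[row - 1:row + 2, col - 1:col + 2].astype(float)
    center = window[1, 1]
    neighbors = np.delete(window.ravel(), 4)   # drop the center cell
    tpi = center - neighbors.mean()            # position vs. surroundings
    tri = np.abs(neighbors - center).mean()    # local ruggedness
    roughness = window.max() - window.min()    # elevation range
    return tpi, tri, roughness

dem = np.array([[10, 10, 10],
                [10, 13, 10],
                [10, 10, 10]], dtype=float)    # a 3 m local peak
tpi, tri, roughness = terrain_descriptors(dem, 1, 1)
```

Stacking such per-cell descriptors as bands alongside the raw DEM is one way to realize the claimed "digital elevation model stack".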
Further (see Claim Interpretation section regarding claim 9 limitations), Hani teaches generating roughness data from digital elevation model data (Hani, pg. 180, 2nd paragraph: “This paper proposes an algorithm to compute surface roughness of digital elevation model (DEM) terrains”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the digital elevation data of Hani with the method of Naidoo in view of Li in order to quantify further aspects of the terrain (Hani, pg. 189, last sentence continued on the next pg.: “The algorithm allows for a good quantification of a region’s convexity/concavity over varying scales, distinguishing between shallow and deep incisions, and hence, provides an accurate surface roughness parameter”).

Claims 10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Naidoo in view of Li, in further view of Shaw (Shaw, Bradley S. Seeing Numbers: Bayesian Optimisation of a LightGBM Model. Medium, 5 Aug 2021 [online], [retrieved on 2026-03-25]. Retrieved from the Internet <URL: https://medium.com/data-science/seeing-numbers-bayesian-optimisation-of-a-lightgbm-model-3642228127b3>), hereinafter Shaw.

Regarding claim 10 (dependent on claim 1), Naidoo in view of Li teaches outputting the trained above ground biomass density model (Naidoo, trained and validated model used for modelling, see 2nd para on pg. 236). Additionally, Li teaches training an above ground biomass density model using a gradient boosted machine learning algorithm (abstract: “The combination of Landsat 8 and Sentinel-1A images as predictor variables in the XGBoost model provided the best AGB estimation”; see Mg/ha density values in “Results”, pg. 4). Li demonstrates the efficacy of this model based on SAR image input (Li, use of Sentinel-1 data, see SAR data in “Satellite data” on pg. 3), similar to that of Naidoo. Li further demonstrates the advantages of using a gradient boosted model instead of a random forest model (Li, pg. 9, 4th para: “The result indicated that the XGBoost model worked better than RF model”; abstract: “The XGBoost model is an effective method for AGB estimation and can reduce the problems of overestimation and underestimation.”).

It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the gradient boosted model (GBM) of Li with the method of Naidoo in order to predict more accurate AGB values (pg. 10, “Conclusions”: “Although there are still problems of high value underestimation and low value overestimation for these two algorithms, the XGBoost algorithm reduced this problem to a certain extent and made the AGB estimation results closer to the sample survey results”) and improve algorithm speed (Li, pg. 4, 2nd para: “XGBoost is a flexible and highly scalable tree structure enhancement model that can handle sparse data, greatly improve algorithm speed, and reduce computational memory in very large-scale data training”).

Furthermore, while Naidoo in view of Li fails to teach a light GBM, Shaw teaches the use of a light gradient boosted machine learning algorithm (Shaw, pg. 3, 1st para: “LightGBM is a gradient boosting framework which uses tree-based learning algorithms”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the light gradient boosted machine learning algorithm of Shaw with the model of Naidoo in view of Li in order to increase the speed of the gradient boosted algorithm (Shaw, pg. 3, 2nd para: “LightGBM can significantly outperform XGBoost and SGB in terms of computational speed and memory consumption”).
Regarding claim 12 (dependent on claim 10), Naidoo in view of Li and Shaw teaches wherein the training further includes using Bayesian optimization (Shaw, pg. 1: “Tuning the hyper-parameters of a LightGBM model using Bayesian optimisation”; see also pg. 11).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

(1) Wessels, K., Mathieu, R., Knox, N., Main, R., Naidoo, L., & Steenkamp, K. (2019). Mapping and monitoring fractional woody vegetation cover in the arid savannas of Namibia using LiDAR training data, machine learning, and ALOS PALSAR data. Remote Sensing, 11(22), 2633.

(2) Urbazaev, M., Thiel, C., Cremer, F., Dubayah, R., Migliavacca, M., Reichstein, M., & Schmullius, C. (2018). Estimation of forest aboveground biomass and uncertainties by integration of field measurements, airborne LiDAR, and SAR and optical satellite data in Mexico. Carbon Balance and Management, 13(1), 5.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN whose telephone number is (571) 272-1179. The examiner can normally be reached M-F 9-5 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW BEE, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/EMMA E DRYDEN/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Nov 23, 2022
Application Filed
May 16, 2025
Non-Final Rejection — §103
Nov 19, 2025
Response Filed
Dec 16, 2025
Final Rejection — §103
Feb 17, 2026
Response after Final Action
Mar 04, 2026
Request for Continued Examination
Mar 06, 2026
Response after Non-Final Action
Mar 25, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561873
IMAGE PROCESSING APPARATUS AND METHOD
2y 5m to grant · Granted Feb 24, 2026
Patent 12543950
SLIT LAMP MICROSCOPE, OPHTHALMIC INFORMATION PROCESSING APPARATUS, OPHTHALMIC SYSTEM, METHOD OF CONTROLLING SLIT LAMP MICROSCOPE, AND RECORDING MEDIUM
2y 5m to grant · Granted Feb 10, 2026
Patent 12526379
AUTOMATIC IMAGE ORIENTATION VIA ZONE DETECTION
2y 5m to grant · Granted Jan 13, 2026
Patent 12340443
METHOD AND APPARATUS FOR ACCELERATED ACQUISITION AND ARTIFACT REDUCTION OF UNDERSAMPLED MRI USING A K-SPACE TRANSFORMER NETWORK
2y 5m to grant · Granted Jun 24, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58% (83% with interview, +25.0%)
Median Time to Grant: 3y 3m
PTA Risk: High
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.
