Prosecution Insights
Last updated: April 19, 2026
Application No. 18/517,364

PROCESSING TECHNIQUES FOR GENERATING, TRACKING, AND VISUALIZING ENVIRONMENTAL INSIGHTS

Non-Final OA: §102, §103
Filed: Nov 22, 2023
Examiner: ZUBERI, MOHAMMED H
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: Lemu Global Limited
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 1m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 70% (above average; 306 granted / 437 resolved; +15.0% vs TC avg)
Interview Lift: +27.8% across resolved cases with interview (strong)
Typical Timeline: 3y 1m average prosecution; 23 currently pending
Career History: 460 total applications across all art units
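The headline figures above are simple ratios. A minimal sketch of the assumed arithmetic follows; the 55% Tech Center baseline is inferred from the +15.0% delta shown, not stated directly on the page:

```python
# Assumed derivation of the examiner stats above (not the tool's actual code).
granted, resolved = 306, 437

allow_rate = granted / resolved   # career allowance rate
tc_avg = 0.55                     # inferred baseline: 70% shown minus the +15.0% delta
delta_vs_tc = allow_rate - tc_avg

print(f"Career allow rate: {allow_rate:.1%}")   # 70.0%
print(f"vs TC avg: {delta_vs_tc:+.1%}")         # +15.0%
```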

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 20.8% (-19.2% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 437 resolved cases.
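Each statute row pairs an allowance rate with a delta against the Tech Center average, so the implied baseline can be recovered from either column. A sketch, assuming the deltas are simple percentage-point differences:

```python
# Recover the implied Tech Center baseline from each rate/delta pair shown above.
# (Assumption: the "vs TC avg" deltas are plain percentage-point differences.)
rates  = {"101": 0.113, "103": 0.536, "102": 0.208, "112": 0.127}
deltas = {"101": -0.287, "103": 0.136, "102": -0.192, "112": -0.273}

baselines = {s: round(rates[s] - deltas[s], 3) for s in rates}
print(baselines)  # every statute implies the same 0.4 (40%) TC average
```

That all four rows recover the same 40% figure suggests the dashboard applies one TC-wide baseline rather than a per-statute one.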

Office Action

Rejections under §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This action is responsive to the patent application as filed on 11/22/2023. This action is made Non-Final. Claims 1-20 are pending in the case. Claims 1, 14, and 20 are independent claims.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 3/1/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Drawings

The drawings filed on 11/22/2023 have been accepted by the Examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 10 and 14 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Bauer (USPUB 20230049590 A1).

Claim 1: Bauer discloses a geographic prediction system comprising memory and one or more processors communicatively coupled to the memory, the one or more processors configured to: receive one or more hyperspectral image frames corresponding to at least a portion of a geographic region (0142 and 0186: The hyperspectral images 106 are used as second training images... 
the model and optionally further software modules, for example the feature extraction module 128, are stored on a computer system 120 comprising or configured to receive one or more test images); generate, using one or more machine learning models, one or more landcover predictions for the geographic region based at least in part on the one or more hyperspectral image frames, wherein: (i) the one or more landcover predictions correspond to one or more geographic portions within the geographic region (0068: This may be advantageous as the high information density in respect to spatial feature information comprises in the first training images may allow the ML-model to learn during the training to correlate spatial image features extracted in the form of first features from the first training images with spatially aligned labels having been predicted based on spectral information comprised in the second training images. Hence, the ML-program has “learned” during the training to predict the labels based on image features which are present only in the first training images but which are absent from (or have a reduced occurrence in) the second training images. Hence, according to embodiments, low-resolution hyperspectral images with high amount of spectral information are used to automatically predict labels which are then automatically aligned to high-resolution images with lower spectral information) and (ii) the one or more machine learning models are trained using at least a plurality of historical hyperspectral image frames that correspond to one or more labeled geographic portions within one or more geographic regions(Figs 1B and 3, 0145 and 0160: A label prediction modules 116 is configured to receive the extracted second features as input and to compute one or more labels for each second training image 106. For example, the label prediction module 118 can comprise a repository comprising a plurality of reference spectral signatures. 
Each reference spectral signature is descriptive of the spectral signature characteristic for a particular type of object. For example, the repository can comprise a hyperspectral reference signature characteristic for plain soil, a hyperspectral reference signature characteristic for healthy sugar beet plants, a hyperspectral reference signature characteristic for sugar beet plants infected with Cercospora, a hyperspectral reference signature characteristic for a 50:50 mixture of healthy and Cercospora-infected sugar beet plants, etc... the lower part of FIG. 3 illustrates the labels obtained for the same test field based on a hyperspectral camera and a label prediction software that uses hyperspectral signatures for predicting the labels. The hyperspectral camera 102 is used for acquiring a hyperspectral image 302 that depicts the same agricultural area as depicted in the test image 108. A comparison of the RGB test image 108 and the hyperspectral test image 302 reveals that both images depict the same agricultural area. Of course, the spectral information outside of the visible spectral range that is comprised in the hyperspectral image 302 cannot be illustrated here. By applying the feature extraction module 1144 extracting second features 116 in the form of spectral signatures and by comparing the extracted spectral signatures of each pixel with respective reference spectral signatures, pixel specific labels can be computed by the label prediction module 118 as described before. By performing an image segmentation step based on the said labels, the labeled and segmented hyperspectral image 304 is generated. A comparison of the two labeled images 206, 304 reveals that the trained ML-model is able to predict the type and position of labels with basically the same accuracy as the label prediction module 118 that uses hyperspectral data as input... 
By using hyperspectral images only at training time but using RGB images for performing automated labeling at test time, the costs and effort associated with using hyperspectral cameras only occurred during the training phase, not during the test phase); and initiate, through an interactive user interface, a presentation of the one or more landcover predictions to a user (0152 and 0189: the labeled and segmented image 110 can be output to a user via a screen... the predicted labels can be used in a segmentation step for computing a segmented image which is shown to a user via a screen).

Claim 10: Bauer discloses the one or more processors are further configured to: receive, through the interactive user interface, user input indicative of a landcover label for a geographic portion within the geographic region; and in response to the user input, update the one or more labeled geographic portions within the one or more geographic regions (0018: Bauer discusses the existing steps for assigning labels to a digital input image including “The manual creation of annotated training data is a highly time consuming, expensive and error prone process. Using information comprised in information-rich digital images acquired with a second, often complex and/or expansive image acquisition technique for automatically predicting the labels, and training a ML model on pairs of aligned first and second trainings images may have the advantage that a trained ML-model is generated that is adapted to automatically predict and generate those labels also based on first images having been acquired by a comparatively cheap/low-complexity first image acquisition technique. This may allow avoiding tedious and biased manual labeling”). 
Claim 14: Bauer discloses A computer-implemented method, the computer-implemented method comprising: receiving, by one or more processors, one or more hyperspectral image frames corresponding to at least a portion of a geographic region (0142 and 0186: The hyperspectral images 106 are used as second training images... the model and optionally further software modules, for example the feature extraction module 128, are stored on a computer system 120 comprising or configured to receive one or more test images); generating, by the one or more processors and using one or more machine learning models, one or more landcover predictions for the geographic region based at least in part on the one or more hyperspectral image frames, wherein: (i) the one or more landcover predictions correspond to one or more geographic portions within the geographic region (0068: This may be advantageous as the high information density in respect to spatial feature information comprises in the first training images may allow the ML-model to learn during the training to correlate spatial image features extracted in the form of first features from the first training images with spatially aligned labels having been predicted based on spectral information comprised in the second training images. Hence, the ML-program has “learned” during the training to predict the labels based on image features which are present only in the first training images but which are absent from (or have a reduced occurrence in) the second training images. 
Hence, according to embodiments, low-resolution hyperspectral images with high amount of spectral information are used to automatically predict labels which are then automatically aligned to high-resolution images with lower spectral information) and (ii) the one or more machine learning models are trained using at least a plurality of historical hyperspectral image frames that correspond to one or more labeled geographic portions within one or more geographic regions (Figs 1B and 3, 0145 and 0160: A label prediction modules 116 is configured to receive the extracted second features as input and to compute one or more labels for each second training image 106. For example, the label prediction module 118 can comprise a repository comprising a plurality of reference spectral signatures. Each reference spectral signature is descriptive of the spectral signature characteristic for a particular type of object. For example, the repository can comprise a hyperspectral reference signature characteristic for plain soil, a hyperspectral reference signature characteristic for healthy sugar beet plants, a hyperspectral reference signature characteristic for sugar beet plants infected with Cercospora, a hyperspectral reference signature characteristic for a 50:50 mixture of healthy and Cercospora-infected sugar beet plants, etc... the lower part of FIG. 3 illustrates the labels obtained for the same test field based on a hyperspectral camera and a label prediction software that uses hyperspectral signatures for predicting the labels. The hyperspectral camera 102 is used for acquiring a hyperspectral image 302 that depicts the same agricultural area as depicted in the test image 108. A comparison of the RGB test image 108 and the hyperspectral test image 302 reveals that both images depict the same agricultural area. Of course, the spectral information outside of the visible spectral range that is comprised in the hyperspectral image 302 cannot be illustrated here. 
By applying the feature extraction module 1144 extracting second features 116 in the form of spectral signatures and by comparing the extracted spectral signatures of each pixel with respective reference spectral signatures, pixel specific labels can be computed by the label prediction module 118 as described before. By performing an image segmentation step based on the said labels, the labeled and segmented hyperspectral image 304 is generated. A comparison of the two labeled images 206, 304 reveals that the trained ML-model is able to predict the type and position of labels with basically the same accuracy as the label prediction module 118 that uses hyperspectral data as input... By using hyperspectral images only at training time but using RGB images for performing automated labeling at test time, the costs and effort associated with using hyperspectral cameras only occurred during the training phase, not during the test phase); and initiating, by the one or more processors and through an interactive user interface, a presentation of the one or more landcover predictions to a user (0152 and 0189: the labeled and segmented image 110 can be output to a user via a screen... the predicted labels can be used in a segmentation step for computing a segmented image which is shown to a user via a screen).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 3-6 and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Bauer in view of Xiang (USPUB 20170090068 A1).

Claims 3 and 16: Bauer discloses every feature of claims 1 and 14. Bauer, by itself, does not seem to completely teach the interactive user interface comprises one or more selectable overlay icons and the presentation of the one or more landcover predictions is initiated in response to a selection of at least one of the one or more selectable overlay icons. The Examiner maintains that these features were previously well-known as taught by Xiang. Xiang teaches the interactive user interface comprises one or more selectable overlay icons and the presentation of the one or more landcover predictions is initiated in response to a selection of at least one of the one or more selectable overlay icons (0054-55: the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers... 
model and field data is stored in model and field data repository 160. Model data comprises data models created for one or more fields). Bauer and Xiang are analogous art because they are from the same problem-solving area, making geographic predictions using hyperspectral imaging. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bauer and Xiang before him or her, to combine the teachings of Bauer and Xiang. The rationale for doing so would have been to provide a user the ability to easily specify which part of a hyperspectral image to make predictions regarding. Therefore, it would have been obvious to combine Bauer and Xiang to obtain the invention as specified in the instant claim(s).

Claims 4 and 17: Bauer, by itself, does not seem to completely teach a geographic portion of the geographic region comprises a geographic polygon or a georeferenced datapoint within the geographic region. The Examiner maintains that these features were previously well-known as taught by Xiang. Xiang teaches a geographic portion of the geographic region comprises a geographic polygon or a georeferenced datapoint within the geographic region (0054-55: the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers... model and field data is stored in model and field data repository 160. Model data comprises data models created for one or more fields). 
Bauer and Xiang are analogous art because they are from the same problem-solving area, making geographic predictions using hyperspectral imaging. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bauer and Xiang before him or her, to combine the teachings of Bauer and Xiang. The rationale for doing so would have been to provide geographic locations for a user. Therefore, it would have been obvious to combine Bauer and Xiang to obtain the invention as specified in the instant claim(s).

Claims 5 and 18: Bauer, by itself, does not seem to completely teach the geographic polygon comprises a closed geographic area within the geographic region and a landcover prediction for the geographic polygon is indicative of an object class physically located within the closed geographic area. The Examiner maintains that these features were previously well-known as taught by Xiang. Xiang teaches the geographic polygon comprises a closed geographic area within the geographic region and a landcover prediction for the geographic polygon is indicative of an object class physically located within the closed geographic area (0054-55: the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers... model and field data is stored in model and field data repository 160. Model data comprises data models created for one or more fields. For example, a crop model may include a digitally constructed model of the development of a crop on the one or more fields). 
Bauer and Xiang are analogous art because they are from the same problem-solving area, making geographic predictions using hyperspectral imaging. Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bauer and Xiang before him or her, to combine the teachings of Bauer and Xiang. The rationale for doing so would have been to provide geographic locations for a user. Therefore, it would have been obvious to combine Bauer and Xiang to obtain the invention as specified in the instant claim(s).

Claims 6 and 19: Bauer, by itself, does not seem to completely teach the object class is one or more of a type of vegetation species or a type of geographic environment. The Examiner maintains that these features were previously well-known as taught by Xiang. Xiang teaches the object class is one or more of a type of vegetation species or a type of geographic environment (0054-55: the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system) and selecting specific CLUs that have been graphically shown on the map. In an alternative embodiment, the user 102 may specify identification data by accessing a map on the user device (served by the agricultural intelligence computer system 130) and drawing boundaries of the field over the map. Such CLU selection or map drawings represent geographic identifiers... model and field data is stored in model and field data repository 160. Model data comprises data models created for one or more fields. For example, a crop model may include a digitally constructed model of the development of a crop on the one or more fields). Bauer and Xiang are analogous art because they are from the same problem-solving area, making geographic predictions using hyperspectral imaging. 
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art, having the teachings of Bauer and Xiang before him or her, to combine the teachings of Bauer and Xiang. The rationale for doing so would have been to provide geographic locations for a user. Therefore, it would have been obvious to combine Bauer and Xiang to obtain the invention as specified in the instant claim(s).

Allowable Subject Matter

Claims 2, 7-9, 11-13, 15 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Note

The Examiner cites particular columns, line numbers and/or paragraph numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2123.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure and is listed in the attached PTOL-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED-IBRAHIM ZUBERI whose telephone number is (571)270-7761. The examiner can normally be reached on M-Th 8-6, Fri: 7-12/OFF.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. 
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Steph Hong, can be reached at (571) 272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMED H ZUBERI/
Primary Examiner, Art Unit 2178

Prosecution Timeline

Nov 22, 2023: Application Filed
Jan 24, 2026: Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585923: DESPARSIFIED CONVOLUTION FOR SPARSE ACTIVATIONS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12582478: SYSTEMS AND METHODS FOR INTEGRATING INTRAOPERATIVE IMAGE DATA WITH MINIMALLY INVASIVE MEDICAL TECHNIQUES (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579650: IMPROVED SPINAL HARDWARE RENDERING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567496: METHOD AND APPARATUS FOR DISPLAYING AND ANALYSING MEDICAL SCAN IMAGES (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547819: MODULAR SYSTEMS AND METHODS FOR SELECTIVELY ENABLING CLOUD-BASED ASSISTIVE TECHNOLOGIES (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 98% (+27.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 437 resolved cases by this examiner; grant probability is derived from the career allow rate.
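A sketch of the assumed projection arithmetic: the with-interview figure reads as the career allow rate plus the interview lift, treated as percentage points and capped at 100%.

```python
# Assumed model behind the projection card (an illustration, not the tool's code).
base_grant_prob = 0.70    # career allow rate
interview_lift  = 0.278   # +27.8 percentage points with an interview

with_interview = min(base_grant_prob + interview_lift, 1.0)
print(f"{with_interview:.0%}")  # 98%
```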
