Prosecution Insights
Last updated: April 19, 2026
Application No. 18/797,221

MODEL AND SIMULATION TOOL FOR DIGITAL IMAGERY AND TERRAIN DATA

Status: Non-Final OA (§102)
Filed: Aug 07, 2024
Examiner: LETT, THOMAS J
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: The United States of America as represented by the Secretary of the Navy
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 47%

Examiner Intelligence

Grants 83% — above average

Career Allow Rate: 83% (599 granted / 719 resolved; +21.3% vs TC avg)
Interview Lift: -36.0% (from resolved cases with interview)
Avg Prosecution: 2y 8m (26 currently pending)
Total Applications: 745 (across all art units)
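The headline figures above are internally consistent and can be reproduced with simple arithmetic. The snippet below is an illustrative check only, assuming the -36.0% interview lift is expressed in percentage points off the career allow rate:

```python
# Reproduce the dashboard's headline examiner statistics.
granted, resolved = 599, 719

# Career allow rate: granted / resolved cases (~0.833, shown as "83%").
allow_rate = granted / resolved

# Interview-adjusted grant probability, assuming the lift is in
# percentage points off the rounded career allow rate: 83 - 36 = 47.
interview_lift_pp = -36.0
with_interview = round(allow_rate * 100) + interview_lift_pp

print(f"Career allow rate: {allow_rate:.1%}")
print(f"With interview: {with_interview:.0f}%")
```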

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 27.4% (-12.6% vs TC avg)
§102: 47.6% (+7.6% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 719 resolved cases.
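The implied Tech Center averages can be recovered by subtracting each delta from the examiner's rate. A quick sketch, assuming the "vs TC avg" deltas are percentage points — notably, all four statutes imply the same estimate of 40.0%:

```python
# Examiner's allow rate per rejection statute and delta vs TC average,
# both in percentage points, as shown above.
stats = {
    "§101": (11.1, -28.9),
    "§103": (27.4, -12.6),
    "§102": (47.6, +7.6),
    "§112": (11.6, -28.4),
}

# Implied TC average = examiner rate - delta (rounded to one decimal).
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avg)
```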

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mayer et al. (US RE44550 E).

Regarding claim 1, Mayer et al. discloses a method for model and simulation (M&S) of digital imagery and terrain data (method and device for the pictorial representation of space-related data, for example, geographical data of the earth, col. 1, lines 15-20), comprising: receiving terrain and/or imagery files including geographic metadata (conduits 6 serve as a collecting network for transmitting data from the spatially distributed data sources 4, col. 6, lines 27-30); extracting the geographic metadata of the received terrain and/or imagery files (spatially distributed raised and/or stored data of the spatially distributed data sources can be provided at the points of their raising and/or storage with references, which indicate the storage points for data of adjacent areas or further data on the same area. If such links (hyperlinks) of the spatially distributed data exist between one another, the central system requires no knowledge of the exact spatial storage points for all data of the object, col. 4, line 65 – col. 5, line 5); and modifying the extracted geographic metadata to represent changes in the terrain or to add assets to the received terrain and/or imagery files (the data are converted to a new co-ordinate system with a new co-ordinate origin, col. 6, lines 18-19).

Regarding claim 2, Mayer et al. discloses the method of claim 1, wherein modifying the extracted metadata includes rewriting the metadata of the received terrain and/or imagery files to create altered terrain and/or imagery files (after an alteration in the location and of the angle of view of the observer, the data are converted to a new co-ordinate system with a new co-ordinate origin. During a continuous movement of the observer therefore the co-ordinates of the data are constantly subjected to co-ordinate transformation, col. 5, lines 16-22).

Regarding claim 3, Mayer et al. discloses the method of claim 1, wherein the assets include buildings (CAD-models of buildings were available, which were inserted into the view, col. 7, lines 45-50), geographic features, or trees.

Regarding claim 4, Mayer et al. discloses the method of claim 1, wherein the terrain and/or imagery files are geoTIFF files (space-related data, for example topography, actual cloud distribution, configurations of roads, rivers or frontiers, satellite images, actual temperatures, historical views, CAD-models, actual camera shots, are called up, stored or generated in a spatially distributed fashion, see Abstract and figure 8. Examiner articulates that the space-related data reads on a geotiff file.).

Regarding claim 5, Mayer et al. discloses the method of claim 1, further comprising: receiving one or more object files (CAD models of buildings, or animated objects, col. 5, lines 33-34); and parsing the one or more object files to extract boundary or elevation characteristics in the one or more object files (the field of view is divided into sections and an investigation is undertaken for each individual section, col. 2, lines 29-33); wherein modifying the extracted geographic metadata to represent changes in the terrain or to add assets to the received terrain and/or imagery files further includes modifying the extracted geographic metadata based on the extracted boundary or elevation characteristics (field of view to be shown are called up from the spatially distributed data sources only in the accuracy necessary for representation of the field of view with the desired image resolution, i.e. for example with high spatial resolution for close areas of the field of view or in low spatial resolution in a view to the horizon of a spherical object, col. 3, lines 7-14).

Regarding claim 6, Mayer et al. discloses the method of claim 1, wherein modifying the extracted geographic metadata to represent changes in the terrain or add assets to the received terrain and/or imagery files is selectable by a user (spatially distributed data call-up and storage systems may be expanded or updated at will, without the central store and evaluation units taking knowledge of the alteration during each such alteration, col. 9, lines 37-40).

Claims 7-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Engstrom (US 20110191023 A1).

Regarding claim 7, Engstrom discloses a method for modifying terrain imagery files comprising: receiving a terrain imagery file having embedded georeferencing metadata (golf course is mapped from the air with a plane to generate stereo (e.g., 80% to 60% overlap) imagery, i.e., image data. This stereo imagery is used to create a large, geo-referenced imagery map of the golf course, para. 0028, step 120); extracting the georeferencing metadata from the terrain imagery file (From all of this acquired data, a 3-D geospatial model is built (georeferenced with X, Y, Z values of each feature) and then integrated into 3-D gaming and web mapping environments, para. 0010); assigning latitude and longitude ranges to the terrain imagery file based on the extracted georeferencing metadata (the step of establishing a global navigation satellite system (GNSS) real time kinematic (RTK) ground control of the land area, para. 0012); selecting boundary latitude, longitude, and altitude (LLA) points within the assigned latitude and longitude ranges for alteration using at least one object file (GNSS allows small electronic receivers to determine their location (longitude, latitude, and altitude) to within several meters, para. 0006); computing one or more intersections to determine points on a map that are within the selected boundary LLA points (photogrammetric tie points are identified on each image. A line of sight (or ray) can be constructed from the camera location to the point on the object. It is the intersection of these rays (i.e., triangulation) that determines the three-dimensional location of the point, para. 0010); assigning at least one of new elevations or new features to the terrain imagery file based on the determined points on the map (geospatial terrain data including elevation measurements, and (3) object data pertaining to trees, bushes, water hazards, buildings, and any other objects present (collectively referred to herein as "3-D golf course data"), para. 0025); and generating a new terrain imagery file including the assigned at least one of new elevations or new features (geo-processed to predetermined mapping and geodetic standards, similar and abiding to the based mapping specifications as outlined by the American Society for Photogrammetry and Remote Sensing (ASPRS), para. 0028).

Regarding claim 8, Engstrom discloses the method of claim 7, wherein the new features include buildings, geographic features, or trees (paras. 0010, 0025, 0028).

Regarding claim 9, Engstrom discloses the method of claim 7, wherein the terrain imagery file is a geoTIFF file (providing three-dimensional, topographic data (x,y,z), para. 0010).

Regarding claim 10, Engstrom discloses the method of claim 7, further comprising: receiving the one or more object files (step 140); and parsing the one or more object files for assigning the latitude and longitude ranges to the terrain imagery file (geo-processing the image data, terrain data, and object data to produce geospatial, three-dimensional mapping data of the surface of the land area, para. 0012).

Regarding claim 11, Engstrom discloses the method of claim 7, wherein generating a new terrain imagery file including the assigned at least one of new elevations or new features is selectable by a user (a "true" golf course navigation device must calculate for the change in hypotenuse or change in elevation, topography or slope with respect to a calculated distance between point A and point B, para. 0009).

Regarding claim 12, Engstrom discloses the method of claim 7, wherein computing the one or more intersections includes determining intersection points between an object obtained from the at least one object file and a reference grid (triangulation, paras. 0010, 0016).

Regarding claim 13, Engstrom discloses a non-transitory computer-readable medium storing computer executable code, wherein the code when executed by at least one processor causes the at least one processor to: retrieve at least one GeoTIFF file (golf course is mapped from the air with a plane to generate stereo (e.g., 80% to 60% overlap) imagery, i.e., image data. This stereo imagery is used to create a large, geo-referenced imagery map of the golf course, para. 0028, step 120); extract geographic metadata contained within the at least one GeoTIFF file (From all of this acquired data, a 3-D geospatial model is built (georeferenced with X, Y, Z values of each feature) and then integrated into 3-D gaming and web mapping environments, para. 0010); modify the extracted geographic metadata to represent changes in the terrain or adding assets to the at least one GeoTIFF file (device may have network connectivity to update the locally stored 3-D golf course data, para. 0026); and rewrite the at least one GeoTIFF file with the modified extracted geographic data (device may have network connectivity to update the locally stored 3-D golf course data, para. 0026).

Regarding claim 14, Engstrom discloses an apparatus for modifying terrain imagery files comprising: at least one processor (paras. 0015, 0029); and at least one memory (data storage, para. 0029) in communication with the at least one processor containing computer readable instructions; wherein the at least one processor running the computer readable instructions is configured to: retrieve at least one GeoTIFF file (golf course is mapped from the air with a plane to generate stereo (e.g., 80% to 60% overlap) imagery, i.e., image data. This stereo imagery is used to create a large, geo-referenced imagery map of the golf course, para. 0028, step 120); extract geographic metadata contained within the at least one GeoTIFF file (From all of this acquired data, a 3-D geospatial model is built (georeferenced with X, Y, Z values of each feature) and then integrated into 3-D gaming and web mapping environments, para. 0010); modify the extracted geographic metadata to represent changes in the terrain or adding assets to the at least one GeoTIFF file (device may have network connectivity to update the locally stored 3-D golf course data, para. 0026); and rewrite the at least one GeoTIFF file with the modified extracted geographic data (device may have network connectivity to update the locally stored 3-D golf course data, para. 0026).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J LETT whose telephone number is (571)272-7464. The examiner can normally be reached Mon-Fri 9-6 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS J LETT/
Primary Examiner, Art Unit 2611
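The workflow recited in claims 7 and 13-14 (extract georeferencing metadata, assign latitude/longitude ranges, compute intersections against boundary LLA points, assign new elevations, rewrite the file) can be sketched in miniature. The snippet below is an illustrative, self-contained model only: real GeoTIFF I/O would use a library such as GDAL or rasterio, and all names here (`TerrainFile`, `apply_asset`, the coordinate values) are hypothetical, not from the application or the cited references.

```python
from dataclasses import dataclass

@dataclass
class TerrainFile:
    """Toy stand-in for a GeoTIFF: georeferencing metadata plus an elevation grid."""
    origin_lat: float   # latitude of the top-left cell
    origin_lon: float   # longitude of the top-left cell
    cell_deg: float     # cell size in degrees (north-up grid)
    elevation: list     # 2-D grid of elevations, elevation[row][col]

def lat_lon_ranges(t: TerrainFile):
    """Assign lat/lon ranges from the extracted georeferencing metadata."""
    rows, cols = len(t.elevation), len(t.elevation[0])
    lat_range = (t.origin_lat - rows * t.cell_deg, t.origin_lat)
    lon_range = (t.origin_lon, t.origin_lon + cols * t.cell_deg)
    return lat_range, lon_range

def apply_asset(t: TerrainFile, boundary, new_elev):
    """Compute which cells intersect the boundary LLA box and assign new elevations."""
    (lat_min, lat_max), (lon_min, lon_max) = boundary
    for r, row in enumerate(t.elevation):
        for c, _ in enumerate(row):
            lat = t.origin_lat - r * t.cell_deg
            lon = t.origin_lon + c * t.cell_deg
            if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
                t.elevation[r][c] = new_elev  # "assign new elevations"
    return t  # the rewritten terrain file

# Usage: a 4x4 flat grid; raise the cells inside a small bounding box.
terrain = TerrainFile(origin_lat=37.0, origin_lon=-76.0, cell_deg=0.01,
                      elevation=[[0.0] * 4 for _ in range(4)])
apply_asset(terrain, boundary=((36.985, 37.0), (-76.0, -75.985)), new_elev=12.5)
```

The point-in-box test here is the simplest possible "intersection" computation; Engstrom's triangulation of photogrammetric rays is a far richer operation that this sketch does not attempt to model.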

Prosecution Timeline

Aug 07, 2024: Application Filed
Jan 24, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602714
LIGHTING AND INTERNET OF THINGS DESIGN USING AUGMENTED REALITY
2y 5m to grant; granted Apr 14, 2026

Patent 12570401
Robot and Unmanned Aerial Vehicle (UAV) Systems for Cell Sites and Towers
2y 5m to grant; granted Mar 10, 2026

Patent 12567217
SMART CONTENT RENDERING ON AUGMENTED REALITY SYSTEMS, METHODS, AND DEVICES
2y 5m to grant; granted Mar 03, 2026

Patent 12561867
SYSTEMS AND METHODS FOR AUTOMATICALLY ADDING TEXT CONTENT TO GENERATED IMAGES
2y 5m to grant; granted Feb 24, 2026

Patent 12555276
Image Generation Method and Apparatus
2y 5m to grant; granted Feb 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 47% (-36.0% lift)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 719 resolved cases by this examiner. Grant probability derived from career allow rate.
