Prosecution Insights
Last updated: April 19, 2026
Application No. 18/593,599

COMPUTER IMPLEMENTED METHOD AND SYSTEM FOR SEMANTIC SEGMENTATION OF WORKSITE

Non-Final OA (§101, §102, §103)
Filed: Mar 01, 2024
Examiner: ADU-JAMFI, WILLIAM NMN
Art Unit: 2677
Tech Center: 2600 (Communications)
Assignee: Caterpillar Inc.
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (grants only 0% of cases; 0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift; based on resolved cases with interview)
Avg Prosecution (typical timeline): 2y 9m
Career History: 25 total applications across all art units; 25 currently pending

Statute-Specific Performance

§101: 19.5% (-20.5% vs TC avg)
§103: 36.8% (-3.2% vs TC avg)
§102: 28.7% (-11.3% vs TC avg)
§112: 14.9% (-25.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 0 resolved cases.

Office Action

§101 §102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The method of claim 1 is directed to a process, which is one of the statutory categories of invention, and passes Step 1 (Statutory Category, MPEP § 2106.03). However, the following limitations of claim 1 recite steps that can be performed in the human mind or with pen and paper, and therefore fail Step 2A Prong One. These limitations constitute mental processes because they describe acts of observation, evaluation, and judgment that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. Additionally, these limitations of claim 1 recite mathematical concepts, which are defined as mathematical relationships, mathematical formulas or equations, or mathematical calculations. The claim must recite (i.e., set forth or describe) a mathematical concept rather than include limitations that are merely based on math.
receiving, by the computing system, a digital surface model (DSM) indicating elevation values corresponding to positions at the worksite;
deriving, by the computing system, slope values associated with the positions based on the elevation values indicated by the DSM;
one or more types of elements present at the worksite; and
locations of the one or more types of elements at the worksite;

Claim 1 fails Step 2A Prong Two because the additional elements beyond the judicial exception do not integrate the judicial exception into a practical application. The claim does not recite a specific asserted improvement in computer technology (MPEP § 2106.05(a)) and instead uses a processor, digital surface model (DSM), semantic segmentation model, and machine learning model to apply the abstract idea on a computer (MPEP § 2106.05(f)). Furthermore, the claim does not impose meaningful limits on the computer components such that the method is tied to a particular machine; the additional elements are described at a high level of generality and can be implemented on any generic computing system (MPEP § 2106.05(b)).

Claim 1 also fails Step 2B, as these additional elements are well-understood, routine, and conventional (WURC), adding nothing significantly more than the abstract idea itself (MPEP § 2106.07(a)(III)); a processor is a generic computer element that is WURC (see MPEP § 2106.05(d)). The other additional elements beyond the judicial exception, including a digital surface model (DSM), semantic segmentation model, and machine learning model, are also WURC (see the Introduction section of Peng et al., "MSINet: Mining scale information from digital surface models for semantic segmentation of aerial images"). As claims 12 and 17 contain this identical ineligible subject matter, they are also rejected.

Claims 6 and 7 recite steps that can be performed in the human mind or with pen and paper, and therefore fail Step 2A Prong One. These limitations constitute mental processes because they describe acts of observation, evaluation, and judgment that can practically be performed in the human mind, or by a human using pen and paper as a physical aid. Additionally, these limitations recite mathematical concepts, which are defined as mathematical relationships, mathematical formulas or equations, or mathematical calculations. The claim must recite (i.e., set forth or describe) a mathematical concept rather than include limitations that are merely based on math. These claims, alongside claims 2-5 and 8-11, also fail Step 2A Prong Two and Step 2B because the additional elements beyond the judicial exception do not integrate the judicial exception into a practical application and are WURC (see the claim 1 analysis above). As claims 13-16 and 18-20 contain this identical ineligible subject matter, they are also rejected.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5-7, and 9-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gonzalez et al. (US 2019/0026531 A1).
Regarding Claim 1, Gonzalez teaches a computer-implemented method, comprising:

Paragraph [0004]: "Embodiments of the present disclosure provide benefits and/or solve one or more of the foregoing or other problems in the art with systems and methods for automatically identifying stockpiles and determining stockpile volumes utilizing digital aerial images of a site captured by a UAV."

receiving, by a computing system comprising a processor, image data depicting a worksite;

Paragraph [0026]: "In particular, in one or more embodiments, the aerial stockpile analysis system captures a plurality of digital aerial images of a site utilizing a UAV."

receiving, by the computing system, a digital surface model (DSM) indicating elevation values corresponding to positions at the worksite;

Paragraph [0020]: "In particular, in one or more embodiments the aerial stockpile analysis system generates a three-dimensional model of a site utilizing digital aerial images captured by a UAV in flight and then utilizes the three-dimensional model of the site to identify potential stockpiles."

Paragraph [0052]: "Specifically, the three-dimensional site representation 206 is a point cloud comprising a plurality of three-dimensional points that reflect elevations of different portions of the site, including buildings, vegetation, and the stockpile 203."

deriving, by the computing system, slope values associated with the positions based on the elevation values indicated by the DSM;

Paragraph [0057]: "For example, in one or more embodiments, the elevation filter 212 analyzes slope and/or elevation of the three-dimensional site representation 206 to determine whether the slope and/or elevation corresponds to a ground object or non-ground object."

Paragraph [0059]: "To illustrate, in one or more embodiments, the elevation filter 212 identifies potential stockpile locations by comparing slopes of potential stockpiles to a stockpile gradient threshold."

and generating, by the computing system, and using a semantic segmentation model, semantic segmentation data that identifies: one or more types of elements present at the worksite;

Paragraph [0029]: "In particular, in one or more embodiments, the aerial stockpile analysis system trains the neural network to classify input models as stockpiles or other objects."

Paragraph [0047]: "The neural network analyzes the two-dimensional features and three-dimensional features to classify the potential stockpiles as stockpiles on the site 104 or non-stockpiles."

and locations of the one or more types of elements at the worksite, wherein the semantic segmentation model is a machine learning model configured to generate the semantic segmentation data based on the image data, the elevation values, and the slope values.

Paragraph [0044]: "For instance, the aerial stockpile analysis system identifies locations of the stockpiles 110-116 by generating a two-dimensional representation and a three-dimensional representation of the site 104."

Paragraph [0070]: "For example, the two-dimensional features can include shape (e.g., an exterior shape of an overhead view of the potential stockpile), area, perimeter (e.g., length of the perimeter around the potential stockpile), width, length, circularity (e.g., a measure of how close a two-dimensional overhead view of the potential stockpile is to a circular shape), color (e.g., a single color, color gradation, color histogram, or color profile), or texture… Similarly, the three-dimensional features can include slope (e.g., slope along one or more surfaces of the three-dimensional stockpile representation 214), elevation profile (e.g., a profile shape generated from the three-dimensional stockpile representation 214),"

Paragraph [0072]: "In relation to FIG. 2A, the neural network 220 comprises a deep neural network (e.g., a convolutional neural network) that includes a plurality of hidden layers that analyze features of two-dimensional inputs and/or three-dimensional inputs to determine a stockpile classification."

Regarding Claim 2, Gonzalez teaches the computer-implemented method of claim 1, wherein: the semantic segmentation data comprises an image mask associated with a type of element of the one or more types of elements, and

Paragraph [0033]: "To illustrate, a location of a stockpile includes a position (e.g., pixels) of a stockpile within a digital aerial image…" Explanation: Supports that the segmentation representation is pixel-based and element-associated.

pixels of the image mask correspond to particular locations of the type of element at the worksite.

Paragraph [0033]: "For example, the term "location" includes coordinates, pixels, or some other indicator describing a position within a two-dimensional or three-dimensional space. To illustrate, a location of a stockpile includes a position (e.g., pixels) of a stockpile within a digital aerial image…" Explanation: Establishes that pixels = locations of the element.

Regarding Claim 3, Gonzalez teaches the computer-implemented method of claim 2, further comprising post-processing the semantic segmentation data by: identifying, by the computing system, outlier pixels that are disconnected, in the image mask, from other pixels that correspond to the type of element;

Paragraph [0115]: "Specifically, the aerial stockpile analysis system can apply a snitch algorithm to the two point clouds to remove noise (e.g., points erroneously included in the potential stockpile or the stockpile boundary)."

Paragraph [0116]: "For example, in one or more embodiments, the aerial stockpile analysis system applies a snitching algorithm that compares the set of points 320 and the set of boundary points 324."

determining, by the computing system, and based on at least one of image processing techniques or telematics data indicative of operations performed at the worksite, that the outlier pixels are unlikely to represent the type of element associated with the image mask;

Paragraph [0115]: "Specifically, the aerial stockpile analysis system can apply a snitch algorithm to the two point clouds to remove noise (e.g., points erroneously included in the potential stockpile or the stockpile boundary)." Explanation: Explicitly supports the determination that points are not valid members of the element class.

and deleting, by the computing system, the outlier pixels from the image mask.

Paragraph [0115]: "Specifically, the aerial stockpile analysis system can apply a snitch algorithm to the two point clouds to remove noise (e.g., points erroneously included in the potential stockpile or the stockpile boundary)."

Paragraph [0117]: "The aerial stockpile analysis system can apply a snitching algorithm to remove the ground point from the set of points 320."

Regarding Claim 5, Gonzalez teaches the computer-implemented method of claim 1, further comprising post-processing the semantic segmentation data by: receiving, by the computing system, telematics data indicative of operations performed at the worksite;

Paragraph [0025]: "Furthermore, the aerial stockpile analysis system can also track the volume of one or more stockpiles on a site over time… Accordingly, the aerial stockpile analysis system can automatically determine, monitor, and report stockpile volumes on a site over time." Explanation: Operational activity over time = telematics operational data.

and adding, by the computing system, and based on the telematics data, contextual data to the semantic segmentation data in association with individual instances of the one or more types of elements,

Paragraph [0025]: "Furthermore, the aerial stockpile analysis system can also track the volume of one or more stockpiles on a site over time… Accordingly, the aerial stockpile analysis system can automatically determine, monitor, and report stockpile volumes on a site over time." Explanation: Additional metadata (volume, temporal behavior) added to segmentation.

wherein the contextual data indicates additional information associated with the individual instances based on the operations performed at the worksite.

Paragraph [0025]: "Furthermore, the aerial stockpile analysis system can also track the volume of one or more stockpiles on a site over time… Accordingly, the aerial stockpile analysis system can automatically determine, monitor, and report stockpile volumes on a site over time." Explanation: Explicitly additional information tied to activity at the site.
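For technical context, the claim 1 step of deriving slope values from DSM elevation values that the §102 mapping above keeps returning to is conventionally a raster gradient computation. The sketch below is illustrative only and is not taken from the application or from Gonzalez:

```python
import numpy as np

def slope_degrees(dsm, cell_size=1.0):
    """Derive a slope raster (degrees) from a DSM of elevation values.

    dsm: 2-D array of elevations; cell_size: ground distance per pixel.
    """
    dz_dy, dz_dx = np.gradient(dsm, cell_size)  # elevation change per unit ground distance
    # Steepest-descent slope angle at each position.
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# A uniform ramp rising 1 m per 1 m of ground distance has a 45-degree slope.
ramp = np.arange(5, dtype=float)[None, :].repeat(5, axis=0)
print(round(float(slope_degrees(ramp)[2, 2]), 6))  # 45.0
```

The same elevation-to-slope relationship is what both the claim language and Gonzalez's "stockpile gradient threshold" filtering presuppose.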
Regarding Claim 6, Gonzalez teaches the computer-implemented method of claim 1, further comprising: providing, by the computing system, the semantic segmentation data to a worksite management system configured to at least one of:

Paragraph [0043]: "As discussed above, clients often seek to know volumes of the stockpiles 110-116 (and the volume of the individual materials within the first stockpile 116a and the second stockpile 116b)…Construction managers often need to know the amount of fill available as a construction project progresses…"

assign machines to perform operations at the worksite based on the one or more types of elements, and the locations of the one or more types of elements, identified by the semantic segmentation data, or track productivity at the worksite based on the one or more types of elements, and the locations of the one or more types of elements, identified by the semantic segmentation data.

Paragraph [0043]: "Construction managers often need to know the amount of fill available as a construction project progresses to ensure that the site has sufficient supply and/or to arrange for sale and delivery of excess material." Explanation: Management system decision-making and logistics functions based on the data.
Regarding Claim 7, Gonzalez teaches the computer-implemented method of claim 1, wherein the semantic segmentation data is first semantic segmentation data representing a first state of the worksite at a first time, and the method further comprises:

Paragraph [0025]: "Indeed, the aerial stockpile analysis system can utilize identified stockpiles from a first collection of digital aerial images of a site (captured during a first period of time, such as a first flight of a UAV) …"

Paragraph [0140]: "In particular, the aerial stockpile analysis system can identify and calculate a volume of stockpiles in relation to digital aerial images of a site captured at first point in time (e.g., during a first flight)."

receiving, by the computing system, second image data depicting the worksite at a second time;

Paragraph [0140]: "The aerial stockpile analysis system can then obtain a plurality of digital aerial images of the site captured at a second point in time (e.g., during a second flight)."

Paragraph [0141]: "Moreover, FIG. 6 illustrates the UAV 100 capturing a second plurality of digital images 602 of the site 104 during a second flight conducted at the second period of time."

receiving, by the computing system, a second DSM indicating second elevation values corresponding to the positions at the worksite;

Paragraph [0142]: "In particular, the aerial stockpile analysis system generates a new two-dimensional representation of the site 104 and a new three-dimensional representation of the site 104 based on the second plurality of digital aerial images 602."

deriving, by the computing system, second slope values associated with the positions based on the second elevation values indicated by the second DSM;

Paragraph [0142]: "In particular, the aerial stockpile analysis system generates a new two-dimensional representation of the site 104 and a new three-dimensional representation of the site 104 based on the second plurality of digital aerial images 602."

and generating, by the computing system, and using the semantic segmentation model, second semantic segmentation data that identifies:

Paragraph [0140]: "The aerial stockpile analysis system can utilize the stockpiles identified at the first point in time to identify stockpiles and calculate volumes of the stockpiles at the second point in time."

the one or more types of elements present at the worksite at the second time;

Paragraph [0140]: "The aerial stockpile analysis system can utilize the stockpiles identified at the first point in time to identify stockpiles and calculate volumes of the stockpiles at the second point in time."

and locations of the one or more types of elements at the worksite at the second time,

Paragraph [0142]: "Moreover, the aerial stockpile analysis system can determine a boundary for each of the stockpiles 612-618, identify a ground reference surface for each of the stockpiles 612-618, and calculate the volumes 622-628 for each of the stockpiles…" Explanation: Includes locations (boundaries, correspondence).

wherein differences between the first semantic segmentation data and the second semantic segmentation data are indicative of changes at the worksite between the first time and the second time.

Paragraph [0140]: "Moreover, the aerial stockpile analysis system can then determine a change in volume between the first point in time and the second point in time."

Paragraph [0141]: "As shown, the site 104 has changed between the first period of time (shown in FIG. 1) and the second period of time (shown in FIG. 6)."

Regarding Claim 9, Gonzalez teaches the computer-implemented method of claim 1, wherein the image data and the DSM is received from an aerial drone that uses one or more sensors to capture the image data and the DSM while the aerial drone is flying over the worksite.

Paragraph [0026]: "In particular, in one or more embodiments, the aerial stockpile analysis system captures a plurality of digital aerial images of a site utilizing a UAV."

Paragraph [0027]: "For example, in one or more embodiments, the aerial stockpile analysis system utilizes a structure from motion algorithm and constrained bundle adjustment algorithm to generate a three-dimensional representation of the site utilizing a plurality of digital aerial images." Explanation: UAV = aerial drone, and images + 3D model both derived from UAV.
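The claim 7 wrap-up limitation, that differences between first and second semantic segmentation data indicate worksite changes, reduces in the simplest case to an element-wise comparison of two label rasters. A minimal illustrative sketch with hypothetical labels (not from the record):

```python
import numpy as np

# Hypothetical class labels per position: 0 = ground, 1 = stockpile, 2 = machine.
seg_t1 = np.array([[0, 1, 1],
                   [0, 1, 1],
                   [0, 0, 0]])
seg_t2 = np.array([[0, 0, 1],
                   [0, 0, 1],
                   [0, 0, 2]])

changed = seg_t1 != seg_t2  # Boolean mask of positions whose element type changed
print(int(changed.sum()))   # 3 positions changed between the two times
```

Gonzalez's volume-change tracking between flights is a coarser instance of the same comparison, applied per stockpile rather than per pixel.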
Regarding Claim 10, Gonzalez teaches the computer-implemented method of claim 1, wherein: the semantic segmentation model is trained based on a training data set comprising:

Paragraph [0186]: "Further, in one or more embodiments, the method 900 also includes training the neural network."

training image data depicting at least one example worksite;

Paragraph [0036]: "As used herein, the term "site" refers to a location on Earth… The term site can include a construction site, a mining site, a property, a wilderness area, a disaster area, or other location."

Paragraph [0038]: "Furthermore, "training model" or "training input" refers to a model of a stockpile (e.g., a three-dimensional stockpile representation or a two-dimensional stockpile representation) or other object utilized to train a neural network."

Paragraph [0039]: "In particular, the term "stockpile" refers to a collection of one or more materials on a surface (e.g., on a ground surface) …"

training DSMs indicative of example elevation values and example slope values associated with the at least one example worksite;

Paragraph [0131]: "To illustrate, the aerial stockpile analysis system can train the neural network 404 by providing three-dimensional features that include slope (e.g., slope along one or more surfaces of a three-dimensional stockpile representation), elevation profile (e.g., a profile shape generated from a three-dimensional stockpile representation),"

and training segmentation data that identifies:

Paragraph [0186]: "For example, the method 900 can also include providing the neural network with a training two-dimensional representation and a training three-dimensional representation corresponding to a ground-truth stockpile classification."

actual types of elements present at the at least one example worksite;

Paragraph [0186]: "For example, the method 900 can also include providing the neural network with a training two-dimensional representation and a training three-dimensional representation corresponding to a ground-truth stockpile classification."

and actual locations of the actual types of elements at the at least one example worksite, and

Paragraph [0186]: "For example, the method 900 can also include providing the neural network with a training two-dimensional representation and a training three-dimensional representation corresponding to a ground-truth stockpile classification."

the semantic segmentation model is trained to identify features, indicated by the training image data, the example elevation values, and the example slope values, that allows the semantic segmentation model to predict the actual types of elements and the actual locations of the actual types of elements indicated by the training segmentation data.

Paragraph [0047]: "In particular, the aerial stockpile analysis system extracts two-dimensional features from the two-dimensional representations of potential stockpiles and three-dimensional features from the three-dimensional representations of potential stockpiles and analyzes the two-dimensional features and three-dimensional features utilizing a neural network."

Paragraph [0073]: "In this manner, the aerial stockpile analysis system trains the neural network 220 to identify significant features from two-dimensional and three-dimensional inputs for predicting a stockpile classification."

Regarding Claim 11, Gonzalez teaches the computer-implemented method of claim 10, wherein the semantic segmentation model uses instances of the features, identified via training of the semantic segmentation model, and indicated by the image data, the elevation values, and the slope values associated with the worksite, to generate the semantic segmentation data associated with the worksite.

Paragraph [0047]: "In particular, the aerial stockpile analysis system extracts two-dimensional features from the two-dimensional representations of potential stockpiles and three-dimensional features from the three-dimensional representations of potential stockpiles and analyzes the two-dimensional features and three-dimensional features utilizing a neural network."

Paragraph [0073]: "In this manner, the aerial stockpile analysis system trains the neural network 220 to identify significant features from two-dimensional and three-dimensional inputs for predicting a stockpile classification." Explanation: This explicitly supports trained features being used during inference.

Regarding Claim 12, Gonzalez teaches all of the limitations of claim 1 above because claim 12 comprises a processor and memory configured to perform substantially the same steps as claim 1. Gonzalez teaches a processor and memory (Fig. 10).

Regarding Claim 13, Gonzalez teaches the computing system of claim 12, and additional limitations are met as in the consideration of claim 2 above.

Regarding Claim 14, Gonzalez teaches the computing system of claim 12, and additional limitations are met as in the consideration of claim 5 above.

Regarding Claim 15, Gonzalez teaches the computing system of claim 12, and additional limitations are met as in the consideration of claim 6 above.

Regarding Claim 16, Gonzalez teaches the computing system of claim 12, and additional limitations are met as in the consideration of claim 10 above.

Regarding Claim 17, Gonzalez teaches all of the limitations of claim 1 above because claim 17 comprises one or more non-transitory computer-readable media configured to perform substantially the same steps as claim 1.
Gonzalez teaches a non-transitory computer-readable medium, stating that "for example, the components 702-716 and their corresponding elements can comprise one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices" (paragraph [0164]).

Regarding Claim 18, Gonzalez teaches the one or more non-transitory computer-readable media of claim 17, and additional limitations are met as in the consideration of claim 2 above.

Regarding Claim 19, Gonzalez teaches the one or more non-transitory computer-readable media of claim 17, and additional limitations are met as in the consideration of claim 5 above.

Regarding Claim 20, Gonzalez teaches the one or more non-transitory computer-readable media of claim 17, and additional limitations are met as in the consideration of claim 10 above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez et al. in view of Wang et al. ("Graph-theoretic post-processing of segmentation with application to dense biofilms").

Regarding Claim 4, Gonzalez teaches the computer-implemented method of claim 2, further comprising post-processing the semantic segmentation data by: identifying, by the computing system, outlier pixels that are disconnected, in the image mask, from other pixels that correspond to the type of element;

Paragraph [0115]: "Specifically, the aerial stockpile analysis system can apply a snitch algorithm to the two point clouds to remove noise (e.g., points erroneously included in the potential stockpile or the stockpile boundary)."

Paragraph [0116]: "For example, in one or more embodiments, the aerial stockpile analysis system applies a snitching algorithm that compares the set of points 320 and the set of boundary points 324."

determining, by the computing system, and based on the at least one of the image processing techniques or the telematics data, that the image mask likely omits additional pixels corresponding to additional locations of the type of element at the worksite;

Paragraph [0062]: "In this manner, the aerial stockpile analysis system can expand a three-dimensional potential stockpile representation to include all points reflecting the potential stockpile." Explanation: Shows recognition that the current representation is incomplete.

and adding, by the computing system, the additional pixels to connect the outlier pixels with the other pixels in the image mask.

Paragraph [0109]: "As shown in FIG. 3, the aerial stockpile analysis system can continue adding points to the set of points 320 included in the potential stockpile 322 based on the stockpile gradient threshold." Explanation: Explicit addition of points to connect and complete representation.
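The outlier-pixel identification and deletion steps mapped for claims 3 and 4 are conventionally implemented as connected-component filtering of an image mask. The sketch below is illustrative only; it is not Gonzalez's "snitching" algorithm and is not taken from the application. It uses a plain flood fill to find 4-connected regions and drops those below a size threshold:

```python
import numpy as np
from collections import deque

def remove_outlier_pixels(mask, min_size=2):
    """Delete pixels forming connected regions smaller than min_size.

    mask: 2-D boolean array (an image mask for one element type).
    Returns a copy with small, disconnected regions removed.
    """
    mask = mask.astype(bool)
    out = mask.copy()
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            # Flood-fill one 4-connected region starting at (sy, sx).
            region, queue = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:
                y, x = queue.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            if len(region) < min_size:  # outlier: too small and disconnected
                for y, x in region:
                    out[y, x] = False
    return out

mask = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=bool)  # lone outlier pixel at (2, 3)
cleaned = remove_outlier_pixels(mask, min_size=2)
print(int(cleaned.sum()))  # 4: the 2x2 block survives, the isolated pixel is deleted
```

Adding pixels to reconnect regions (claim 4) would be the dual operation, e.g. morphological closing as Wang describes below the rejection heading.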
Gonzalez does not teach the limitation, "determining, by the computing system, and based on at least one of image processing techniques or telematics data indicative of operations performed at the worksite, that the outlier pixels are likely to represent the type of element associated with the image mask."

However, Wang teaches that segmentation pipelines commonly require post-processing to correct over- and under-segmentation, and discloses using image-processing criteria to determine when disconnected regions should be connected. Wang explains that "common practices for solving over-segmentation problems are region merging, grouping, and morphological closing" (B. The need for post-processing). Wang further teaches that objective image features are used to decide whether connecting pixels is appropriate, stating that "area and eccentricity information are used to determine if the morphological closing operation is needed to connect the pixels between neighboring segments" (B. The need for post-processing). This disclosure corresponds to determining, based on image-processing techniques, that disconnected pixels are likely part of the same object, and performing an operation (e.g., morphological closing) that adds pixels to connect neighboring segments.

Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to incorporate Wang's post-processing decision logic into the method of Gonzalez. Wang explains that such post-processing is necessary, stating that "the need for post-processing commonly exists in most workflows due to the difficulties in segmentation" (B. The need for post-processing). A person of ordinary skill in the art would have been motivated to incorporate Wang's teachings in order to improve segmentation accuracy and address known deficiencies such as over- and under-segmentation.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez et al. in view of Ronneberger et al. ("U-Net: Convolutional Networks for Biomedical Image Segmentation").

Regarding Claim 8, Gonzalez teaches the computer-implemented method of claim 1, wherein: the image data is an orthophotograph, and

Paragraph [0035]: "In particular, the term two-dimensional representation includes an orthophoto, a map, or an overhead depiction of an object or a site."

a plurality of color channels associated with the orthophotograph;

Paragraph [0070]: "For example, the two-dimensional features can include shape (e.g., an exterior shape of an overhead view of the potential stockpile), area, perimeter (e.g., length of the perimeter around the potential stockpile), width, length, circularity (e.g., a measure of how close a two-dimensional overhead view of the potential stockpile is to a circular shape), color (e.g., a single color, color gradation, color histogram, or color profile), or texture."

an elevation channel corresponding to the elevation values indicated by the DSM;

Paragraph [0070]: "Similarly, the three-dimensional features can include slope (e.g., slope along one or more surfaces of the three-dimensional stockpile representation 214), elevation profile (e.g., a profile shape generated from the three-dimensional stockpile representation 214)…"

and a slope channel corresponding to the slope values derived from the DSM.

Paragraph [0070]: "Similarly, the three-dimensional features can include slope (e.g., slope along one or more surfaces of the three-dimensional stockpile representation 214), elevation profile (e.g., a profile shape generated from the three-dimensional stockpile representation 214)…"

While Gonzalez teaches a convolutional neural network (CNN), he fails to teach that the specific architecture is a U-Net image segmentation model.
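The claim 8 input format, color channels plus an elevation channel and a slope channel, amounts to stacking DSM-derived rasters alongside the orthophoto bands as additional model-input channels. A minimal illustrative sketch with hypothetical arrays (not from the record):

```python
import numpy as np

# Hypothetical 4x4 worksite tile: RGB orthophoto bands plus DSM-derived layers.
rgb = np.zeros((4, 4, 3))                     # color channels of the orthophotograph
elevation = np.linspace(0, 3, 16).reshape(4, 4)  # elevation values from the DSM
dz_dy, dz_dx = np.gradient(elevation)
slope = np.hypot(dz_dx, dz_dy)                # slope values derived from the DSM

# Stack color, elevation, and slope into one multi-channel segmentation input.
model_input = np.dstack([rgb, elevation, slope])
print(model_input.shape)  # (4, 4, 5)
```

A segmentation network such as the U-Net discussed in the rejection would then consume this five-channel tensor instead of a plain three-channel image.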
However, Ronneberger teaches a U-Net architecture that is designed for semantic (pixel-wise) segmentation with precise localization, stating that “the architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization” (Abstract). Ronneberger further explains that U-Net was developed because pixel-wise segmentation requires both contextual understanding and accurate spatial localization, stating that “in many visual tasks, especially in biomedical image processing, the desired output should include localization, i.e., a class label is supposed to be assigned to each pixel” (Introduction). Ronneberger also teaches that the U-Net architecture improves segmentation accuracy by combining low-level spatial features with high-level contextual features, stating that “high resolution features from the contracting path are combined with the upsampled output…a successive convolution layer can then learn to assemble a more precise output based on this information” (Introduction).

Thus, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to substitute the generic CNN model of Gonzalez with the U-Net architecture taught by Ronneberger, because Ronneberger explicitly teaches that U-Net is designed to improve pixel-level localization and segmentation accuracy, which are the same goals pursued by the semantic segmentation system of Gonzalez. The substitution of one known segmentation architecture (a CNN-based model) with another well-known segmentation architecture (U-Net) represents a predictable use of prior art elements according to their established functions.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. Boudreaux et al. (US 20220092850 A1) discloses an AI-based geospatial system that uses CNNs (including a U-Net architecture) to perform semantic segmentation and generate pixel-wise height maps from aerial imagery, which are then used to produce digital surface/elevation models and support multi-temporal analysis such as change detection.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM ADU-JAMFI, whose telephone number is (571) 272-9298. The examiner can normally be reached M-T 8:00-6:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WILLIAM ADU-JAMFI/
Examiner, Art Unit 2677

/ANDREW W BEE/
Supervisory Patent Examiner, Art Unit 2677
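To make the quoted Ronneberger passages concrete, one level of the U-Net idea (pool on the contracting path to capture context, upsample on the expanding path, then concatenate the high-resolution features with the upsampled output) can be sketched at the array-shape level. This is a hypothetical NumPy illustration of the concept only, not code from Ronneberger or from the application; the function name and the toy five-channel input (three color channels plus DSM elevation and slope, mirroring claim 8) are invented for the sketch:

```python
import numpy as np

def unet_level_sketch(features):
    """One U-Net level at the array-shape level: 2x2 max-pool (contracting
    path), nearest-neighbor upsample (expanding path), then a skip
    connection concatenating the original high-resolution features with
    the upsampled output along the channel axis."""
    h, w, c = features.shape
    pooled = features.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))
    upsampled = pooled.repeat(2, axis=0).repeat(2, axis=1)
    # In a real U-Net, convolution layers would follow; here we only show
    # the feature concatenation that Ronneberger describes.
    return np.concatenate([features, upsampled], axis=2)

# A toy five-channel input: 3 color channels, 1 elevation channel (DSM),
# and 1 slope channel derived from the DSM as a gradient magnitude.
rgb = np.zeros((4, 4, 3))
dsm = np.tile(np.arange(4.0), (4, 1))  # a simple elevation ramp
slope = np.hypot(*np.gradient(dsm))    # slope of the ramp is 1 everywhere
x = np.dstack([rgb, dsm, slope])       # shape (4, 4, 5)
out = unet_level_sketch(x)             # shape (4, 4, 10)
```

The doubling of the channel count after the skip connection reflects the “high resolution features … combined with the upsampled output” language quoted above.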

Prosecution Timeline

Mar 01, 2024
Application Filed
Jan 20, 2026
Non-Final Rejection — §101, §102, §103 (current)


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
