Prosecution Insights
Last updated: April 19, 2026
Application No. 18/232,107

SYSTEM AND METHOD FOR GENERATING FEATURE DATA

Status: Non-Final OA (§102, §103)
Filed: Aug 09, 2023
Examiner: DHOOGE, DEVIN J
Art Unit: 2677
Tech Center: 2600 — Communications
Assignee: Here Global B.V.
OA Round: 2 (Non-Final)

Grant Probability: 70% (Favorable)
OA Rounds: 2-3
To Grant: 3y 5m
With Interview: 99%
Examiner Intelligence

Career Allow Rate: 70% — above average
  50 granted / 71 resolved; +8.4% vs TC avg
Interview Lift: +42.9% (strong), resolved cases with interview
Typical timeline: 3y 5m avg prosecution; 48 currently pending
Career history: 119 total applications across all art units

Statute-Specific Performance

§101:  8.2% (-31.8% vs TC avg)
§103: 49.4% (+9.4% vs TC avg)
§102: 35.8% (-4.2% vs TC avg)
§112:  5.7% (-34.3% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 71 resolved cases
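The headline figures above are simple ratios over the examiner's resolved cases. Assuming the dashboard derives them in the obvious way (the actual analytics pipeline is not shown in this report), the arithmetic can be sketched as:

```python
# Sketch of how the dashboard's headline ratios could be derived.
# Assumes plain ratios over resolved cases; the real pipeline behind
# these numbers is not disclosed in the report.

granted = 50        # cases allowed
resolved = 71       # allowed + abandoned
tc_delta = 8.4      # reported offset vs. Tech Center average, in points

allow_rate = 100 * granted / resolved    # career allow rate, %
tc_avg_implied = allow_rate - tc_delta   # implied TC 2600 average, %

print(f"Career allow rate: {allow_rate:.1f}%")      # ~70.4%, shown as 70%
print(f"Implied TC average: {tc_avg_implied:.1f}%")
```

50/71 rounds to the 70% shown above, and subtracting the reported +8.4% offset implies a Tech Center average of roughly 62%.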

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

This communication is filed in response to the office action filed on 12/23/2025. Claim 14 is amended. Claims 1-20 are pending.

Response to Arguments

Applicant's arguments filed on 12/23/2025 on page 9, under REMARKS, with respect to the claim objection to claim 14 have been fully considered and are persuasive. The objection to the claim has been withdrawn. Applicant's arguments filed on 12/23/2025 on pages 9-14, under REMARKS, with respect to the 35 U.S.C. 102 and 103 rejections of claims 1-20 have been fully considered and are persuasive. The rejections of the claims have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of US 11,823,389 B2.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5, 7, 13-15, 17, and 19-20 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by US 11,823,389 B2 to CHAWLA (hereinafter "CHAWLA").

As per claim 1, CHAWLA discloses a system for generating feature data (a computing system and related method adapted for generating a road network mapping/graph of an aerial view image using road feature data/image feature data; abstract; figs 1-10; column 5, line 39 – column 6, line 65), the system comprising: at least one processor (the computing system is adapted to include at least one processing component for executing programs and instructions; column 3, lines 48-55; column 15, lines 34-46); and at least one memory including computer program code for one or more programs (the computing system further includes a computing memory component to store data and programs to be executed by the processor; column 3, lines 48-55; column 15, lines 34-46), the at least one memory and the computer program code configured to, with the at least one processor, cause the system to perform at least the following:

determine a set of break points associated with at least one road feature based on mask image data (the system is adapted to determine junction/intersection points (these points act substantially as the claimed break points; see specification [0007], which defines break points as break points associated with at least one road feature based on mask image data), wherein junction points act as points where three or more edges of roads in the aerial view input images meet, and these junction points are analyzed to extract image features including road features such as edges, intersections, junctions, and corners, along with road names stored in a road feature database; column 5, line 39 – column 6, line 65; column 7, line 1 – column 8, line 60; column 12, lines 4-30), wherein the mask image data is associated with overhead image data comprising the at least one road feature (a binary mask is applied to the input overhead/aerial images, sorting pixels as "road pixel" yes or no; road feature detection is used to determine whether a pixel is a road pixel, and the pixels are connected to form the road network graph, following the graph to junctions of three or more path edges; column 6, lines 1-65; column 7, lines 1-57; column 11, lines 5-14);

generate a set of cropped feature maps, based on i) processing of a global feature map associated with the overhead image data obtained from a global feature segmentation model, and ii) the set of break points (the aerial images are broken into regions (cropped) based on pixel classifications and road network interconnection; classifying pixels in an aerial image as "road" or "nonroad" is a well-studied problem, with solutions generally using probabilistic models of road images based on assumptions about local road-like features, such as road geometry and color intensity, drawing inferences with MAP estimation, and the map regions can be connected to produce an overall road network graph; column 5, lines 4-28; column 6, lines 1-65; column 8, lines 11-64);

generate the feature data associated with the at least one road feature based on i) application of a local feature detection model on the generated set of cropped feature maps and ii) the set of break points (using geometric probabilistic models known in the art, features of road images are extracted, including local road-like features such as road geometry and color intensity, with inferences drawn via MAP estimation; road pixels in the aerial images are traced to follow the path of road pixels, and road features, including junction points acting as the break points where three paths meet, are extracted from the path of road pixels of the aerial image of the region being analyzed; column 6, lines 1-65; column 7, lines 1-57; column 11, lines 15-65); and

store the generated feature data in a geographic database (the system is adapted to store models and results/data, including road network graphs/maps, in a database component communicatively connected to the computing system; column 6, lines 1-65; column 15, lines 34-51).

As per claim 2, CHAWLA discloses the system of claim 1, wherein the system is further caused to generate the mask image data based on application of the global feature segmentation model on the overhead image data (the model produces masked image data (the image data is overhead image data, see fig. 1): the segmentation output is first thresholded to obtain a binary mask, and morphological thinning is applied to produce a mask where roads are represented as one-pixel-wide center lines; figs 1-3; column 1, lines 55-64; column 5, line 39 – column 6, line 36; column 7, lines 2-34).

As per claim 3, CHAWLA discloses the system of claim 1, wherein the overhead image data comprises at least one of: satellite imagery or aerial imagery (the image data input into the system is aerial images providing an overhead viewpoint of road networks; figs 1-3; column 1, lines 55-64; column 5, line 39 – column 6, line 36).
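The mapping above reads CHAWLA's junction points (three or more road edges meeting on a thinned, one-pixel-wide road mask) onto the claimed break points. On such a skeleton mask, terminal and junction points can be found by counting set neighbors; a minimal sketch in pure Python, with illustrative names and conventions not taken from either the application or the reference:

```python
def break_points(mask):
    """Classify set pixels of a one-pixel-wide skeleton mask.

    Terminal points have exactly one 4-connected set neighbor;
    junction points have three or more (4-connectivity is used here
    for simplicity).  `mask` is a list of lists of 0/1.  Names and
    conventions are illustrative only.
    """
    h, w = len(mask), len(mask[0])
    terminals, junctions = [], []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            n = sum(mask[y + dy][x + dx]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= y + dy < h and 0 <= x + dx < w)
            if n == 1:
                terminals.append((y, x))
            elif n >= 3:
                junctions.append((y, x))
    return terminals, junctions
```

On a plus-shaped skeleton, for example, the center pixel (four set neighbors) comes back as a junction point and the four arm tips as terminal points.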
As per claim 5, CHAWLA discloses the system of claim 4, wherein the system is further caused to: identify one or more break points of the set of break points having a relative distance less than or equal to a first threshold distance (the threshold used for the binary mask of the segmentation model is adapted to act as a distance threshold by allowing the system to produce the path that is yielded as the highest correct shortest path from junction point to junction point; column 7, lines 2-34; column 9, lines 14-30; column 12, lines 4-56); and combine the identified one or more break points of the determined set of break points (intersection/junction points (break points) are vertices where the road meets three or more edges; during the segmentation approach these vertices are used with a variable threshold to produce a binary mask, and results are reported for the threshold that yields the highest correct shortest paths, where Long, Short, and No Path specify different reasons for an inferred shortest path being incorrect; the system is further adapted to perform feature combination during the junction point encoding process; column 7, lines 2-34; column 15, lines 1-33).
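In the abstract, claim 5's combining of break points within a threshold distance is a point-merging step. Neither the claim mapping nor CHAWLA spells out an algorithm for it; one plausible way to do it (greedy clustering, pure Python, illustrative names only) is:

```python
import math

# Greedy merge of break points whose distance to an existing cluster
# centroid is at most `threshold`; each cluster is replaced by its
# centroid.  Illustrative sketch only -- not taken from CHAWLA.

def combine_break_points(points, threshold):
    merged = []  # list of (centroid, member_points)
    for p in points:
        for i, (c, members) in enumerate(merged):
            if math.dist(p, c) <= threshold:
                members.append(p)
                cx = sum(q[0] for q in members) / len(members)
                cy = sum(q[1] for q in members) / len(members)
                merged[i] = ((cx, cy), members)
                break
        else:
            merged.append((p, [p]))
    return [c for c, _ in merged]
```

For example, with points (0, 0), (1, 0), and (10, 10) and a threshold of 2, the first two merge into one centroid and the third survives unchanged.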
As per claim 7, CHAWLA discloses the system of claim 1, wherein, to generate the feature data, the system is further caused to determine a plurality of vertices until one or more stop conditions are met (the road feature extraction system and method is adapted to determine a plurality of vertices as feature junction points based on stop actions; column 6, lines 19-67; column 8, lines 35-63; column 12, lines 38-56), the determination comprising: selecting a first break point of the set of break points as a current vertex (inputting the aerial image and extracting road features to find intersection points of the features acting as break points, where the feature either ends or acts as a junction point for the current vertex; column 6, line 60 – column 7, line 23; column 11, lines 14-45); applying the local feature detection model on the current vertex and a corresponding cropped feature map of the set of cropped feature maps (the feature detection model is applied to the aerial images to produce a feature map of a desired region; the vertex points are found based on stop actions and threshold values relating to junction and intersection points of the features being tracked, and the bounding box acts as a cropping tool to generate a cropped feature map; column 6, line 60 – column 7, line 44; column 8, lines 11-39); and receiving, as a predicted output of the local feature detection model, a subsequent vertex of the current vertex included in the plurality of vertices and a confidence score associated with the subsequent vertex (the model outputs a segmentation map which is further thresholded to obtain a binary mask, and morphological thinning is used to produce a mask where roads are one pixel wide, interpreted as a graph with set pixels as vertices and edges connecting adjacent set pixels; the system also applies a usability score which acts as a confidence score, because usability is defined with two goals, (a) to give a score that is representative of the inferred map's practical usability, and (b) to be interpretable, which both translate to confidence in the model, because to be confident in the model one would desire to use the model and for the results to be easily interpreted and accurate; column 7, lines 2-56; column 11, line 46 – column 12, line 51).

As per claim 13, CHAWLA discloses the system of claim 1, wherein the local feature detection model corresponds to a deep learning graph-based model (the computing system comprises a deep learning neural network (CNN) adapted to generate a road network graph from overhead aerial images, implemented as a DeepRoadMapper model or the improved RoadTracer model; column 13, lines 29-66).

As per claim 14, CHAWLA discloses a method for generating feature data (a computing system and related method adapted for generating a road network mapping/graph of an aerial view image using road feature data/image feature data; abstract; figs 1-10; column 5, line 39 – column 6, line 65), comprising: determining a set of break points associated with at least one road feature based on mask image data (the system is adapted to determine junction/intersection points (these points act substantially as the claimed break points; see specification [0007], which defines break points as break points associated with at least one road feature based on mask image data), wherein junction points act as points where three or more edges of roads in the aerial view input images meet, and these junction points are analyzed to extract image features including road features such as edges, intersections, junctions, and corners, along with road names stored in a road feature database; column 5, line 39 – column 6, line 65; column 7, line 1 – column 8, line 60; column 12, lines 4-30), wherein the mask image data is associated with overhead image data comprising the at least one road feature (a binary mask is applied to the input overhead/aerial images, which sorts pixels as "road pixel" yes or
no; road feature detection is used to determine whether a pixel is a road pixel, and the pixels are connected to form the road network graph, following the graph to junctions of three or more path edges; column 6, lines 1-65; column 7, lines 1-57; column 11, lines 5-14); generating a set of cropped feature maps, based on i) processing of a global feature map associated with the overhead image data obtained from a global feature segmentation model and ii) the set of break points (the aerial images, which will comprise the road network graph once output as a final result, are broken into regions (cropped) based on pixel classifications and road network interconnection; classifying pixels in an aerial image as "road" or "nonroad" is a well-studied problem, with solutions generally using probabilistic models of road images based on assumptions about local road-like features, such as road geometry and color intensity, drawing inferences with MAP estimation, and the map regions can be connected to produce an overall road network graph; column 5, lines 4-28; column 6, lines 1-65; column 8, lines 11-64); generating the feature data associated with the at least one road feature based on i) application of a local feature detection model on the generated set of cropped feature maps and ii) the set of break points (using geometric probabilistic models known in the art, features of road images are extracted, including local road-like features such as road geometry and color intensity, with inferences drawn via MAP estimation; road pixels in the aerial images are traced to follow the path of road pixels, and road features, including junction points acting as the break points where three paths meet, are extracted from the path of road pixels of the aerial image of the region being analyzed; column 6, lines 1-65; column 7, lines 1-57; column 11, lines 15-65); and storing the generated feature data in a geographic database (the system is adapted to store models and results/data, including road network graphs/maps, in a database component communicatively connected to the computing system; column 6, lines 1-65; column 15, lines 34-51).

As per claim 15, CHAWLA discloses the method of claim 14, further comprising generating the mask image data based on application of the global feature segmentation model on the overhead image data (the model produces masked image data (the image data is overhead image data, see fig. 1): the segmentation output is first thresholded to obtain a binary mask, and morphological thinning is applied to produce a mask where roads are represented as one-pixel-wide center lines; figs 1-3; column 1, lines 55-64; column 5, line 39 – column 6, line 36; column 7, lines 2-34).

As per claim 17, CHAWLA discloses the method of claim 14, further comprising determining a plurality of vertices until one or more stop conditions are met to generate the feature data (the road feature extraction system and method is adapted to determine a plurality of vertices as feature junction points based on stop actions; column 6, lines 19-67; column 8, lines 35-63; column 12, lines 38-56), wherein the determining comprises: selecting a first break point of the set of break points as a current vertex (inputting the aerial image and extracting road features to find intersection/junction points of the features acting as break points, where the feature either ends or acts as a junction point for the current vertex including three or more edges; column 6, line 60 – column 7, line 23; column 11, lines 14-45; column 12, lines 4-30); applying the local feature detection model on the current vertex and a corresponding cropped feature map of the set of cropped feature maps (the feature detection model is applied to the aerial images to produce a feature map of a desired region; the vertex points are found based on stop actions and threshold values relating to junction and intersection points of the features being tracked, and the bounding box acts as a cropping tool to generate a cropped feature map; column 6, line 60 – column 7, line 44; column 8, lines 11-39); and receiving, as a predicted output of the local feature detection model, a subsequent vertex of the current vertex included in the plurality of vertices and a confidence score associated with the subsequent vertex (the model outputs a segmentation map which is further thresholded to obtain a binary mask, and morphological thinning is used to produce a mask where roads are one pixel wide, interpreted as a graph with set pixels as vertices and edges connecting adjacent set pixels; the system also applies a usability score which acts as a confidence score, because usability is defined with two goals, (a) to give a score that is representative of the inferred map's practical usability, and (b) to be interpretable, which both translate to confidence in the model, because to be confident in the model one would desire to use the model and for the results to be easily interpreted and accurate; column 7, lines 2-56; column 11, line 46 – column 12, line 51).
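Claims 7 and 17 recite an iterative trace: start from a break point, repeatedly apply a local model that returns the next vertex plus a confidence score, and stop when a stop condition is met. A minimal sketch of that loop, where the `model` callable, `crop_maps` mapping, and thresholds are hypothetical stand-ins not drawn from the application or CHAWLA:

```python
# Iterative vertex tracing as recited in claims 7/17: starting from a
# break point, repeatedly ask a local feature detection model for the
# next vertex and a confidence score, stopping when confidence drops
# below a threshold or the model signals a stop.  `model(vertex, fmap)`
# is a hypothetical stand-in returning (next_vertex_or_None, confidence).

def trace_vertices(start, crop_maps, model, min_conf=0.5, max_steps=100):
    vertices = [start]
    current = start
    for _ in range(max_steps):                 # hard stop condition
        fmap = crop_maps.get(current)          # cropped map for this vertex
        nxt, conf = model(current, fmap)
        if nxt is None or conf < min_conf:     # model's stop condition
            break
        vertices.append(nxt)
        current = nxt
    return vertices
```

With a stub model that walks a fixed path and then signals a stop, the loop returns the start vertex followed by each predicted vertex in order.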
As per claim 19, CHAWLA discloses a non-transitory computer-readable storage medium carrying one or more sequences of one or more instructions (a computing system and related method adapted for generating a road network mapping/graph of an aerial view image using road feature data/image feature data, the computing system further including a computing memory component to store data and programs to be executed by the processor; abstract; figs 1-10; column 3, lines 48-55; column 5, line 39 – column 6, line 65; column 15, lines 34-46) which, when executed by one or more processors, cause an apparatus to perform operations comprising (the computing system is adapted to include at least one processing component for executing programs and instructions; column 3, lines 48-55; column 15, lines 34-46): determining a set of break points associated with at least one road feature based on mask image data (the system is adapted to determine junction/intersection points (these points act substantially as the claimed break points; see specification [0007], which defines break points as break points associated with at least one road feature based on mask image data), wherein junction points act as points where three or more edges of roads in the aerial view input images meet, and these junction points are analyzed to extract image features including road features such as edges, intersections, junctions, and corners, along with road names stored in a road feature database; column 5, line 39 – column 6, line 65; column 7, line 1 – column 8, line 60; column 12, lines 4-30), wherein the mask image data is associated with overhead image data comprising the at least one road feature (a binary mask is applied to the input overhead/aerial images, sorting pixels as "road pixel" yes or no; road feature detection is used to determine whether a pixel is a road pixel, and the pixels are connected to form the road network graph, following the graph to junctions of three or more path edges; column 6, lines 1-65; column 7, lines 1-57; column 11, lines 5-14); generating a set of cropped feature maps, based on i) processing of a global feature map associated with the overhead image data obtained from a global feature segmentation model and ii) the set of break points (the aerial images, which will comprise the road network graph once output as a final result, are broken into regions (cropped) based on pixel classifications and road network interconnection; classifying pixels in an aerial image as "road" or "nonroad" is a well-studied problem, with solutions generally using probabilistic models of road images based on assumptions about local road-like features, such as road geometry and color intensity, drawing inferences with MAP estimation, and the map regions can be connected to produce an overall road network graph; column 5, lines 4-28; column 6, lines 1-65; column 8, lines 11-64); generating feature data associated with the at least one road feature based on i) application of a local feature detection model on the generated set of cropped feature maps and ii) the set of break points (using geometric probabilistic models known in the art, features of road images are extracted, including local road-like features such as road geometry and color intensity, with inferences drawn via MAP estimation; road pixels in the aerial images are traced to follow the path of road pixels, and road features, including junction points acting as the break points where three paths meet, are extracted from the path of road pixels of the aerial image of the region being analyzed; column 6, lines 1-65; column 7, lines 1-57; column 11, lines 15-65); and storing the generated feature data in a geographic database (the system is adapted to store models and results/data, including road network graphs/maps, in a database component communicatively connected to the computing system; column 6, lines 1-65; column 15, lines 34-51).

As per claim 20, CHAWLA discloses the non-transitory computer-readable storage medium of claim 19, wherein the operations further comprise determining a plurality of vertices until one or more stop conditions are met to generate the feature data (the road feature extraction system and method is adapted to determine a plurality of vertices as feature junction points based on stop actions; column 6, lines 19-67; column 8, lines 35-63; column 12, lines 38-56), wherein the determining comprises: selecting a first break point of the set of break points as a current vertex (inputting the aerial image and extracting road features to find intersection points of the features acting as break points, where the feature either ends or acts as a junction point for the current vertex; column 6, line 60 – column 7, line 23; column 11, lines 14-45); applying the local feature detection model on the current vertex and a corresponding cropped feature map of the set of cropped feature maps (the feature detection model is applied to the aerial images to produce a feature map of a desired region; the vertex points are found based on stop actions and threshold values relating to junction and intersection points of the features being tracked, and the bounding box acts as a cropping tool to generate a cropped feature map; column 6, line 60 – column 7, line 44; column 8, lines 11-39); and receiving, as a predicted output of the local feature detection model, a subsequent vertex of the current vertex included in the plurality of vertices and a confidence score associated with the subsequent vertex (the model outputs a segmentation map which is further thresholded to obtain a binary mask, and morphological thinning is used to produce a mask where roads are one pixel wide, interpreted as a graph with set pixels as vertices and edges connecting adjacent set pixels; the system also applies a usability score which acts as a confidence score, because usability is defined with two goals, (a) to give a score that is representative of the inferred map's practical usability, and (b) to be interpretable, which both translate to confidence in the model, because to be confident in the model one would desire to use the model and for the results to be easily interpreted and accurate; column 7, lines 2-56; column 11, line 46 – column 12, line 51).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 4 and 16 are rejected under 35 U.S.C. §
103 as being obvious over US 11,823,389 B2 to CHAWLA (hereinafter "CHAWLA") in view of US 2021/0150227 A1 to HU et al. (hereinafter "HU").

As per claim 4, CHAWLA discloses the system of claim 1. CHAWLA fails to disclose wherein the system is further caused to: apply a filter on the mask image data; and determine the set of break points based on the application of the filter on the mask image data, wherein the set of break points comprises one or more terminal points and one or more junction points.

HU discloses wherein the system is further caused to: apply a filter on the mask image data (the image data is filtered using a color array filter to arrive at only RGB images and is further manually filtered to remove bad road predictions; paragraphs [0055-0056]); and determine the set of break points based on the application of the filter on the mask image data, wherein the set of break points comprises one or more terminal points and one or more junction points (the various masks applied to extract various features include an intersection-over-union mask in order to match the highest-overlapped features of the various masks based on mask scores, and include a final map showing intersections of various features acting as break points of the map, by binarizing the final map Mf 550 to include all masks having a confidence level (mask score) that meets one or more criteria, such as a threshold value, and including one or more of certain extracted features in the final map overlay; this would include points which act as terminal points of various features and points that act as junction points continuing on with a different feature; paragraphs [0050-0054]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to have the set of break points comprise one or more terminal points and one or more junction points, as in the HU reference. The suggestion/motivation for doing so would have been to provide the ability to produce the final map Mf 550, which represents an individual instance of a feature in the pair of stereo images that were used as inputs for the process, and would therefore allow the user to observe the final map and point out the intersection points or "break points" of the feature maps, as suggested by HU at paragraph [0054]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HU with CHAWLA to obtain the invention as specified in claim 4.

As per claim 16, CHAWLA discloses the method of claim 14. CHAWLA fails to disclose further comprising: applying a filter on the mask image data; and determining the set of break points based on the application of the filter on the mask image data, wherein the set of break points comprises one or more terminal points and one or more junction points.

HU discloses applying a filter on the mask image data (the image data is filtered using a color array filter to arrive at only RGB images and is further manually filtered to remove bad road predictions; paragraphs [0055-0056]); and determining the set of break points based on the application of the filter on the mask image data, wherein the set of break points comprises one or more terminal points and one or more junction points (the various masks applied to extract various features include an intersection-over-union mask in order to match the highest-overlapped features of the various masks based on mask scores, and include a final map showing intersections of various features acting as break points of the map, by binarizing the final map Mf 550 to include all masks having a confidence level (mask score) that meets one or more criteria, such as a threshold value, and including one or more of certain extracted features in the final map overlay; this would include points which act as terminal points of various features and points that act as junction points continuing on with a different feature; paragraphs [0050-0054]).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to have the set of break points comprise one or more terminal points and one or more junction points, as in the HU reference. The suggestion/motivation for doing so would have been to provide the ability to produce the final map Mf 550, which represents an individual instance of a feature in the pair of stereo images that were used as inputs for the process, and would therefore allow the user to observe the final map and point out the intersection points or "break points" of the feature maps, as suggested by HU at paragraph [0054].
Further, one skilled in the art could have combined the elements as described above by known method with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HU with CHAWLA to obtain the invention as specified in claim 16. Claims 6, and 18 are rejected under 35 § U.S.C. 103 as being obvious over US 11,823,389 B2 to CHAWLA (hereinafter “CHAWLA”) in view of US 2020/0072610 A1 to HOFMANN et al (hereinafter “HOFMANN”). As per claim 6, CHAWLA discloses the system of claim 1. CHAWLA fails to disclose wherein the processing of the global feature map comprises: up sampling the global feature map to match a size of the overhead image data in a decoder arm of the global feature segmentation model; determining a region associated with each of the set of break points in the up sampled global feature map; and generating the set of cropped feature maps by extracting the corresponding region for each of the set of break points from the up sampled global feature map. 
HOFMANN discloses wherein the processing of the global feature map comprises: up sampling the global feature map to match a size of the overhead image data in a decoder arm of the global feature segmentation model (the feature extracting system via the models decodes the aerial image input into the system by using smaller tile sizes for the mask and aerial images, so the feature extraction system will need less memory to decode the image; the system is adapted to zoom in (up sample) or zoom out (down sample) the various tiles relative to the size of the features being extracted; paragraphs [0070], [0075]); determining a region associated with each of the set of break points in the up sampled global feature map (further, a feature identifier module searches the features associated with each retrieved map tile for a field that identifies the particular feature type, and stores data identifying each feature having a field that matches the feature type and the tile or tiles; if the feature spans multiple tiles, map feature identifier 310 may access tiled map data at a particular zoom level or multiple zoom levels, and the zoom levels are selected based on a typical size of the feature being modeled; the layers of the feature map would include multiple geographic features including but not limited to roads, rivers, and various structures, and would include intersection points of the features, which comprise break points per the Google definition; paragraphs [0038], [0070]); and generating the set of cropped feature maps by extracting the corresponding region for each of the set of break points from the up sampled global feature map (generating the tiles of feature maps as seen in figure 8, which are zoomed in to see the features being extracted by that particular model in that particular layer and would be cropped in tiles, where the insides of the tile boundaries would represent the region; fig 8; paragraph [0070]). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to generate the set of cropped feature maps by extracting the corresponding region for each of the set of break points from the up sampled global feature map, as taught by the HOFMANN reference. The suggestion/motivation for doing so would have been to provide high resolution, clear map tiles for the cropped road feature map at a plurality of zoom levels to allow the user to zoom in on the map at various levels, as suggested by paragraph [0036] of HOFMANN. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HOFMANN with CHAWLA to obtain the invention as specified in claim 6. As per claim 18, CHAWLA discloses the method of claim 14. CHAWLA fails to disclose wherein the processing of the global feature map comprises: up sampling the global feature map to match a size of the overhead image data in a decoder arm of the global feature segmentation model; determining a region associated with each of the set of break points in the up sampled global feature map; and generating the set of cropped feature maps by extracting the corresponding region for each of the set of break points from the up sampled global feature map. 
HOFMANN discloses wherein the processing of the global feature map comprises: up sampling the global feature map to match a size of the overhead image data in a decoder arm of the global feature segmentation model (the feature extracting system via the models performs a method to decode the aerial image input into the system by using smaller tile sizes for the mask and aerial images, so the feature extraction system will need less memory to decode the image; the system is adapted to zoom in (up sample) or zoom out (down sample) the various tiles relative to the size of the features being extracted; paragraphs [0070], [0075]); determining a region associated with each of the set of break points in the up sampled global feature map (further, a feature identifier module searches the features associated with each retrieved map tile for a field that identifies the particular feature type, and stores data identifying each feature having a field that matches the feature type and the tile or tiles; if the feature spans multiple tiles, map feature identifier 310 may access tiled map data at a particular zoom level or multiple zoom levels, and the zoom levels are selected based on a typical size of the feature being modeled; the layers of the feature map would include multiple geographic features including but not limited to roads, rivers, and various structures, and would include intersection points of the features, which comprise break points per the Google definition; paragraphs [0038], [0070]); and generating the set of cropped feature maps by extracting the corresponding region for each of the set of break points from the up sampled global feature map (generating the tiles of feature maps as seen in figure 8, which are zoomed in to see the features being extracted by that particular model in that particular layer and would be cropped in tiles, where the insides of the tile boundaries would represent the region; fig 8; paragraph [0070]). 
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to generate the set of cropped feature maps by extracting the corresponding region for each of the set of break points from the up sampled global feature map, as taught by the HOFMANN reference. The suggestion/motivation for doing so would have been to provide high resolution, clear map tiles for the cropped road feature map at a plurality of zoom levels to allow the user to zoom in on the map at various levels, as suggested by paragraph [0036] of HOFMANN. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine HOFMANN with CHAWLA to obtain the invention as specified in claim 18. Claims 8-12 are rejected under 35 U.S.C. § 103 as being obvious over US 11,823,389 B2 to CHAWLA (hereinafter “CHAWLA”) in view of US 2025/0166333 A1 to YU et al. (hereinafter “YU”). As per claim 8, CHAWLA discloses the system of claim 7. CHAWLA fails to disclose wherein the system is further caused to: down sample the set of cropped feature maps; and apply the local feature detection model on the current vertex and the corresponding cropped feature map of the down sampled set of cropped feature maps. YU discloses wherein the system is further caused to: down sample the set of cropped feature maps (the system is adapted to down sample the cropped feature maps of the target image; fig 5; paragraphs [0059-0060]); and apply the local feature detection model on the current vertex and the corresponding cropped feature map of the down sampled set of cropped feature maps (and further apply feature detection models to the down sampled images of the corresponding desired features' respective feature maps; fig 5, paragraphs [0059-0061], [0070-0073]). 
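The down-sample-then-apply step of claim 8 might look like the following minimal sketch (illustrative; 2x2 average pooling is an assumed stand-in for whatever down sampling the application actually uses, and `apply_local_model` is a hypothetical name):

```python
import numpy as np

def downsample_2x(crop: np.ndarray) -> np.ndarray:
    """2x2 average pooling, one simple way to down sample a cropped
    feature map (trailing odd rows/columns are dropped for brevity)."""
    h, w = crop.shape
    trimmed = crop[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def apply_local_model(model, vertex, crops):
    """Run a (stand-in) local feature detection model on each
    down sampled crop, conditioned on the current vertex."""
    return [model(vertex, downsample_2x(c)) for c in crops]
```

Any callable taking `(vertex, downsampled_crop)` can play the role of the local model here.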
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to apply the local feature detection model on the current vertex and the corresponding cropped feature map of the down sampled set of cropped feature maps, as taught by the YU reference. The suggestion/motivation for doing so would have been to provide a customized model for a specific customizable object, as suggested by YU at paragraph [0072]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine YU with CHAWLA to obtain the invention as specified in claim 8. As per claim 9, CHAWLA discloses the system of claim 7. CHAWLA fails to disclose wherein, to determine the plurality of vertices, the system is further caused to: determine that a portion of the at least one road feature in a region included in a cropped feature map of the set of cropped feature maps is associated with a substantially straight geometry, wherein the cropped feature map is associated with a current vertex received as the predicted output from the local feature detection model; iteratively apply the local feature detection model on a previous vertex received as the predicted output, the current vertex and the corresponding cropped feature map associated with the current vertex, until the one or more stop conditions are met; and receive, as the predicted output of the local feature detection model, the determined plurality of vertices. 
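The iterative apply-until-stop pattern recited in claim 9 can be sketched generically (all names and the feedback scheme are hypothetical simplifications, not taken verbatim from YU or the claims): the model is fed the previous vertex, the current vertex, and the crop tied to the current vertex, and its predictions accumulate until a stop condition fires.

```python
def trace_vertices(model, start_vertex, get_crop, stop, max_steps=1000):
    """Iteratively feed the previous and current vertex (plus the crop
    associated with the current vertex) back into a local detection
    model, collecting predicted vertices until the stop condition is met.

    model     -- callable(prev_vertex, cur_vertex, crop) -> next_vertex
    get_crop  -- callable(vertex) -> cropped feature map for that vertex
    stop      -- callable(cur_vertex, next_vertex) -> bool
    """
    vertices = [start_vertex]
    prev, cur = None, start_vertex
    for _ in range(max_steps):          # hard cap guards against no stop
        nxt = model(prev, cur, get_crop(cur))
        if stop(cur, nxt):
            break
        vertices.append(nxt)
        prev, cur = cur, nxt
    return vertices
```

With a toy model that steps one pixel along a straight road, the trace collects vertices until the stop condition rejects the next prediction.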
YU discloses wherein, to determine the plurality of vertices, the system is further caused to: determine that a portion of the at least one road feature in a region included in a cropped feature map of the set of cropped feature maps is associated with a substantially straight geometry (the camera pose data includes at least two pose angles from pitch, roll, and yaw values; trigonometric processing is performed on each of the at least two pose angles as well as on each of the plurality of fused pose angles to obtain a plurality of pose representation parameters, which would include a straight-line pose for a straight road feature; paragraphs [0052-0056]), wherein the cropped feature map is associated with a current vertex received as the predicted output from the local feature detection model (the system produces cropped feature maps which include skeletons and preset vertices included in the 3D model acting as the current vertices of the model; paragraphs [0047], [0056-0060]); iteratively apply the local feature detection model on a previous vertex received as the predicted output, the current vertex and the corresponding cropped feature map associated with the current vertex, until the one or more stop conditions are met (the process of obtaining the adaptation information of a plurality of target vertices is repeated until the adaptation information of the plurality of target vertices indicates that the reference three-dimensional model meets the adaptation requirements and falls within an adaptation degree range, which includes an upper and lower limit acting as stop conditions, wherein the limits are user adjustable using a slider feature; paragraphs [0101], [0105], [0108]), and receive, as the predicted output of the local feature detection model, the determined plurality of vertices (and output the 3D model reconstruction of the target image using the user selected number of target vertices and in the example provided the output number of vertices by the 
model is 1600 based on extracted features of the feature map; paragraphs [0025], [0034-0035], [0070]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to determine that a portion of the at least one road feature in a region included in a cropped feature map of the set of cropped feature maps is associated with a substantially straight geometry, wherein the cropped feature map is associated with a current vertex received as the predicted output from the local feature detection model, and to iteratively apply that model, as taught by the YU reference. The suggestion/motivation for doing so would have been to provide the ability to select the number of vertices, as one of a number of parameters used for model control it is not restricted and can be flexibly set according to the desired accuracy and complexity of the model, as suggested by YU at paragraph [0035]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine YU with CHAWLA to obtain the invention as specified in claim 9. As per claim 10, CHAWLA discloses the system of claim 7. 
CHAWLA fails to disclose wherein the system is further caused to: determine that a portion of the at least one road feature in a region included in a cropped feature map of the set of cropped feature maps is associated with a substantially curved geometry, wherein the cropped feature map is associated with a current vertex received as the predicted output from the local feature detection model; iteratively apply the local feature detection model on a set of previous vertices received as predicted outputs from the local feature detection model, the current vertex and the corresponding cropped feature map associated with the current vertex, until the one or more stop conditions are met; and receive, as the predicted output of the local feature detection model, the determined plurality of vertices. YU discloses wherein the system is further caused to: determine that a portion of the at least one road feature in a region included in a cropped feature map of the set of cropped feature maps is associated with a substantially curved geometry (the camera pose data includes at least two pose angles from pitch, roll, and yaw values; trigonometric processing is performed on each of the at least two pose angles as well as on each of the plurality of fused pose angles to obtain a plurality of pose representation parameters, which would include a non-straight-line pose for a curved road feature and would change the angular values to properly track said feature; paragraphs [0052-0054]), wherein the cropped feature map is associated with a current vertex received as the predicted output from the local feature detection model (the system produces cropped feature maps which include skeletons and preset vertices included in the 3D model acting as the current vertices of the model; paragraphs [0047], [0056-0060]); iteratively apply the local feature detection model on a set of previous vertices received as predicted outputs from the local feature detection model, the current vertex and 
the corresponding cropped feature map associated with the current vertex, until the one or more stop conditions are met (the process of obtaining the adaptation information of a plurality of target vertices is repeated until the adaptation information of the plurality of target vertices indicates that the reference three-dimensional model meets the adaptation requirements and falls within an adaptation degree range, which includes an upper and lower limit acting as stop conditions, wherein the limits are user adjustable using a slider feature; paragraphs [0101], [0105], [0108]), and receive, as the predicted output of the local feature detection model, the determined plurality of vertices (and output the 3D model reconstruction of the target image using the user selected number of target vertices and in the example provided the output number of vertices by the model is 1600 based on extracted features of the feature map; paragraphs [0025], [0034-0035], [0070]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to determine that a portion of the at least one road feature in a region included in a cropped feature map of the set of cropped feature maps is associated with a substantially curved geometry, wherein the cropped feature map is associated with a current vertex received as the predicted output from the local feature detection model, as taught by the YU reference. The suggestion/motivation for doing so would have been to provide the ability to select the number of vertices, as one of a number of parameters used for model control it is not restricted and can be flexibly set according to the desired accuracy and complexity of the model, as suggested by YU at paragraph [0035]. 
Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine YU with CHAWLA to obtain the invention as specified in claim 10. As per claim 11, CHAWLA discloses the system of claim 7. CHAWLA fails to disclose wherein the one or more stop conditions comprises: a distance between the subsequent vertex and the current vertex is less than or equal to a second threshold distance; or a distance between the subsequent vertex and an end point of a cropped feature map that corresponds to a boundary of the overhead image data is less than or equal to a third threshold distance. YU discloses wherein the one or more stop conditions comprises: a distance between the subsequent vertex and the current vertex is less than or equal to a second threshold distance (each distance between target vertices and a center point of a region of the feature map may be used as the second threshold distance by applying a user customizable limit, where the upper and lower values of the limit range act as the threshold values; paragraphs [0083], [0095-0096], [0101], [0108]); or a distance between the subsequent vertex and an end point of a cropped feature map that corresponds to a boundary of the overhead image data is less than or equal to a third threshold distance (each distance between target vertices and an edge of a region of the feature map may be used as the third threshold distance by applying a user customizable limit, where the upper and lower values of the limit range act as the threshold values; paragraphs [0083], [0095-0096], [0101], [0108]). 
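Claim 11's distance-based stop conditions reduce to a simple disjunction, sketched here (illustrative only; the parameter names `d2` and `d3` are placeholders for the second and third threshold distances):

```python
from math import dist

def should_stop(subsequent, current, end_point, d2: float, d3: float) -> bool:
    """Stop if the subsequent vertex is within the second threshold
    distance of the current vertex, OR within the third threshold
    distance of a crop end point on the image boundary."""
    return dist(subsequent, current) <= d2 or dist(subsequent, end_point) <= d3
```

Either clause alone suffices to halt the iterative tracing, matching the claim's "or" phrasing.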
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to have a stop condition in which a distance between the subsequent vertex and the current vertex is less than or equal to a second threshold distance, as taught by the YU reference. The suggestion/motivation for doing so would have been that, once the limits desired by the user are applied, the user may use the information as the final distance information, which is then used as the adaptation degree information for the target vertex, as suggested by YU at paragraph [0096]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine YU with CHAWLA to obtain the invention as specified in claim 11. As per claim 12, CHAWLA discloses the system of claim 7. CHAWLA fails to disclose wherein, to generate the feature data, the system is further caused to: combine the generated mask image data and the determined plurality of vertices; merge overlapping vertices of the determined plurality of vertices; and connect vertices of the determined plurality of vertices having a distance less than or equal to a fourth threshold distance. 
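The merge-and-connect post-processing recited in claim 12 could be illustrated as follows (a naive O(n²) sketch with hypothetical names; `eps` and the fourth threshold distance `d4` are placeholders, and nothing here is drawn from YU or the application):

```python
from math import dist

def merge_overlapping(vertices, eps=1e-6):
    """Collapse vertices that coincide (within eps) into a single
    representative, preserving first-seen order."""
    merged = []
    for v in vertices:
        if not any(dist(v, m) <= eps for m in merged):
            merged.append(v)
    return merged

def connect_nearby(vertices, d4):
    """Return an edge for every pair of distinct vertices whose
    distance is at most the fourth threshold distance."""
    return [(a, b)
            for i, a in enumerate(vertices)
            for b in vertices[i + 1:]
            if dist(a, b) <= d4]
```

For road-graph-sized vertex sets, a spatial index would replace the pairwise scan, but the threshold logic is the same.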
YU discloses wherein, to generate the feature data, the system is further caused to: combine the generated mask image data and the determined plurality of vertices (to achieve more accurate feature extraction, the feature extraction network can combine image features and camera pose data for feature extraction; paragraph [0049]); merge overlapping vertices of the determined plurality of vertices (the target convolution parameters corresponding to each convolutional unit are obtained by applying re-parameterization techniques to merge the parameters of a plurality of branches of the model in order to simplify the feature extraction procedure and eliminate redundancies, which includes overlapping feature extraction masks; paragraph [0064]); and connect vertices of the determined plurality of vertices having a distance less than or equal to a fourth threshold distance (treat the plurality of triangular facets connected to the first vertex of a plurality as the corresponding region on the 3D model for the target vertex, calculate a plurality of distances from the target vertex to the triangular facets in the region of the feature map, and generate the adaptation degree information for the target vertex based on the calculated distances and the user desired distance ranges set using the slider interface available to the user; paragraphs [0035], [0083], [0095-0096], [0101], [0108]). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify CHAWLA to have wherein, to generate the feature data, the system is further caused to: combine the generated mask image data and the determined plurality of vertices; merge overlapping vertices of the determined plurality of vertices; and connect vertices of the determined plurality of vertices having a distance less than or equal to a fourth threshold distance, as taught by the YU reference. 
The suggestion/motivation for doing so would have been that, once the limits desired by the user are applied, the user may use the information as the final distance information, which is then used as the adaptation degree information for the target vertex, as suggested by YU at paragraph [0096]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine YU with CHAWLA to obtain the invention as specified in claim 12. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE, whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Devin Dhooge/ USPTO Patent Examiner, Art Unit 2677 /ANDREW W BEE/ Supervisory Patent Examiner, Art Unit 2677

Prosecution Timeline

Aug 09, 2023
Application Filed
Sep 23, 2025
Non-Final Rejection — §102, §103
Dec 23, 2025
Response Filed
Mar 14, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602773
Deep-Learning-based T1-Enhanced Selection of Linear Coefficients (DL-TESLA) for PET/MR Attenuation Correction
2y 5m to grant Granted Apr 14, 2026
Patent 12579780
HYPERSPECTRAL TARGET DETECTION METHOD OF BINARY-CLASSIFICATION ENCODER NETWORK BASED ON MOMENTUM UPDATE
2y 5m to grant Granted Mar 17, 2026
Patent 12524982
NON-TRANSITORY COMPUTER READABLE RECORDING MEDIUM, VISUALIZATION METHOD AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Jan 13, 2026
Patent 12517146
IMAGE-BASED DECK VERIFICATION
2y 5m to grant Granted Jan 06, 2026
Patent 12505673
MULTIMODAL GAME VIDEO SUMMARIZATION WITH METADATA
2y 5m to grant Granted Dec 23, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
70%
Grant Probability
99%
With Interview (+42.9%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 71 resolved cases by this examiner. Grant probability derived from career allow rate.
