Prosecution Insights
Last updated: April 19, 2026
Application No. 18/787,894

DATA ANALYTICS METHODS FOR SPATIAL DATA, AND RELATED SYSTEMS AND DEVICES

Non-Final OA (§102, §103)
Filed: Jul 29, 2024
Examiner: SAMARA, HUSAM TURKI
Art Unit: 2161
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Datarobot Inc.
OA Round: 1 (Non-Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 10m
Grant Probability with Interview: 74%

Examiner Intelligence

Career Allow Rate: 55% of resolved cases (90 granted / 164 resolved; at TC average)
Interview Lift: +18.7% for resolved cases with interview (a strong lift)
Typical Timeline: 3y 10m average prosecution; 26 applications currently pending
Career History: 190 total applications across all art units

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 54.7% (+14.7% vs TC avg)
§102: 16.3% (-23.7% vs TC avg)
§112: 7.9% (-32.1% vs TC avg)
Tech Center averages are estimates. Based on career data from 164 resolved cases.

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the application filed on 29 July 2024. Claims 57-67 and 87-95 are pending in the case. Claims 57, 61, and 88 are the independent claims. This action is non-final.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 57-60 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Heinonen et al. (US 2021/0264245 A1).

Regarding claim 57, Heinonen teaches an automated, spatially-aware feature engineering method, comprising:

extracting geometric data from spatial data, the spatial data representing a plurality of spatial objects, the extracted geometric data characterizing one or more geometric elements of each of the spatial objects (see Heinonen, Paragraph [0074], “The method and the data computing environment described herein enables processing of the spatial data, specifically, geospatial data, which includes data related to one or more objects present in an environment (e.g. a real-world environment).” [Spatial data (i.e., geometric data characterizing one or more geometric elements of each of the spatial objects) is processed (i.e., extracted).]);

extracting location data from the spatial data, the extracted location data indicating one or more sets of coordinates of one or more locations associated with each of the spatial objects (see Heinonen, Paragraphs [0075]-[0076], “the spatial data refers to geospatial data comprising information about an environment, specifically, various objects present in the environment. The information relating to the various objects in the environment comprise, for example, numerical values in a geographic coordinate system. Optionally, the spatial data for the environment comprises information associated with geometry and/or a geographical location of each object in the environment.” [The spatial data, which refers to information associated with geographical location (i.e., location data indicating one or more sets of coordinates), may be extracted.]);

generating a dataset comprising a plurality of spatial observations representing the respective plurality of spatial objects, wherein each spatial observation includes (1) a respective value of a location feature indicating a set of coordinates of a representative location of the spatial object corresponding to the spatial observation, and (2) respective values of one or more other features (see Heinonen, Paragraphs [0076], [0085], “the spatial data is a spatial point cloud data or a spatial two-dimensional data. It will be appreciated that the spatial point cloud data comprises a set of datapoints that represent objects or space in the environment. In an example, the spatial point cloud data is the spatial two-dimensional data when set of datapoints of the spatial point cloud represent information in ‘X’ and ‘Y’ geometric coordinates. … in a case where the spatial data is the spatial point cloud data, each point of the spatial point cloud data have certain properties (or attributes), such as time, size, intensity, return number, pulse width, resolution, colour, and the like.” [The spatial data comprises a set of data points (i.e., dataset) that represents objects, which may include a set of coordinates representing a location and other features.]);

for each of the spatial observations, deriving respective values of one or more solitary spatial features based on a portion of the extracted geometric data characterizing the geometric elements of the spatial object represented by the spatial observation, and adding the values of the one or more solitary spatial features to the dataset (see Heinonen, Paragraphs [0088]-[0089], “the first sub-feature is created by selecting points from the first feature which are spatially correlated with a known geometry, such as a plane, a spline, a line. Optionally such known geometry is expressed as a primary component vectors and eigenvalues in a multidimensional space. For example, all points corresponding to a known 3D geometry of a plane representing a wall of a building shall be in the first sub-feature. For example, all points corresponding to a known geometry of a line representing a streetlamp pole shall be in the second sub-feature. For example, all points corresponding to a known geometry of a line representing a roof ridge in a 2-dimensional image shall be in the second sub-feature.” [The features, which are associated with a known geometry, such as a plane, a spline, or a line, and which are expressed as primary component vectors and eigenvalues (i.e., deriving respective values of one or more solitary spatial features based on a portion of the extracted geometric data characterizing the geometric elements of the spatial object), are created (i.e., adding the values of the one or more solitary spatial features to the dataset).]); and

training one or more machine learning models by performing one or more machine learning processes on the dataset (see Heinonen, Paragraph [0115], “Referring to FIG. 4 there is shown an exemplary scenario 400 for classification of spatial data into a plurality of object classes using a trained deep neural network, in accordance with an embodiment of the present disclosure.” [The spatial data is classified using a trained deep neural network (i.e., machine learning model).]).

Regarding claim 58, Heinonen further teaches wherein the one or more solitary spatial features include a particular feature, wherein the respective value of the particular feature of a particular spatial observation indicates a length, area, shape, or direction of the spatial object represented by the particular spatial observation (see Heinonen, Paragraph [0088], “the first sub-feature is created by selecting points from the first feature which are spatially correlated with a known geometry, such as a plane, a spline, a line. Optionally such known geometry is expressed as a primary component vectors and eigenvalues in a multidimensional space. For example, all points corresponding to a known 3D geometry of a plane representing a wall of a building shall be in the first sub-feature. For example, all points corresponding to a known geometry of a line representing a streetlamp pole shall be in the second sub-feature. For example, all points corresponding to a known geometry of a line representing a roof ridge in a 2-dimensional image shall be in the second sub-feature.” [The features, which are associated with a known geometry, such as a plane, a spline, or a line, are expressed as primary component vectors and eigenvalues (i.e., the respective value of the particular feature of a particular spatial observation indicates a shape or direction of the spatial object).]).

Regarding claim 59, Heinonen further teaches wherein the one or more solitary spatial features include a particular feature, wherein the respective value of the particular feature of a particular spatial observation indicates a length, area, shape, or direction of a geometric element of the spatial object represented by the particular spatial observation (see Heinonen, Paragraph [0088], quoted above with respect to claim 58 [The features, which are associated with a known geometry, such as a plane, a spline, or a line, are expressed as primary component vectors and eigenvalues (i.e., the respective value of the particular feature of a particular spatial observation indicates a shape or direction of a geometric element of the spatial object).]).

Regarding claim 60, Heinonen further teaches wherein the one or more solitary spatial features include a particular feature, wherein the respective value of the particular feature of a particular spatial observation indicates a standard distance or a standard deviational ellipse of the spatial object represented by the particular spatial observation (see Heinonen, Paragraph [0102], “generating the hierarchy of the plurality of features further comprises projecting each point associated with each feature of the plurality of features into a local coordinate system within a feature. In an example, each point is aligned with the eigen vectors and optionally scaled to standard length.” [The features, which are associated with the points, are aligned with the eigenvectors and scaled to standard length (i.e., the respective value of the particular feature of a particular spatial observation indicates a standard distance).]).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 61-67 and 87-95 are rejected under 35 U.S.C. 103 as being unpatentable over Heinonen in view of Scarzanella (US 2020/0167614 A1).
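For readers less familiar with the claimed subject matter, the "solitary spatial features" at issue in claims 57-60 (per-object quantities such as length, area, and a representative location derived from geometry alone) can be illustrated with a minimal sketch. This is purely illustrative code written for this summary; it is not drawn from the application, Heinonen, or Scarzanella, and all names are hypothetical.

```python
import math

def polygon_area(pts):
    """Unsigned shoelace area of a polygon given as (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def polygon_perimeter(pts):
    """Total edge length of a closed polygon."""
    n = len(pts)
    return sum(math.dist(pts[i], pts[(i + 1) % n]) for i in range(n))

def centroid(pts):
    """Vertex-average centroid, used here as the 'representative location'."""
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

def solitary_features(spatial_object):
    """Derive per-object ('solitary') features from the object's geometry alone."""
    pts = spatial_object["geometry"]
    return {"location": centroid(pts),
            "area": polygon_area(pts),
            "perimeter": polygon_perimeter(pts)}

# A unit square as a toy spatial object
obs = solitary_features({"geometry": [(0, 0), (1, 0), (1, 1), (0, 1)]})
# obs["area"] == 1.0, obs["perimeter"] == 4.0, obs["location"] == (0.5, 0.5)
```

In claim terms, each such dictionary would be one "spatial observation" in the generated dataset: a location feature plus values of other per-object features.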
Regarding claim 61, Heinonen teaches an automated, spatially-aware feature engineering method, comprising:

extracting location data from spatial data, the spatial data representing a plurality of spatial objects, the extracted location data indicating one or more sets of coordinates of one or more locations associated with each of the spatial objects (see Heinonen, Paragraphs [0074]-[0076], “the spatial data refers to geospatial data comprising information about an environment, specifically, various objects present in the environment. The information relating to the various objects in the environment comprise, for example, numerical values in a geographic coordinate system. Optionally, the spatial data for the environment comprises information associated with geometry and/or a geographical location of each object in the environment.” [The spatial data, which refers to information associated with geographical location (i.e., location data indicating one or more sets of coordinates), may be extracted.]); and

generating a dataset comprising a plurality of spatial observations representing the respective plurality of spatial objects, wherein each spatial observation includes (1) a respective value of a location feature indicating a set of coordinates of a representative location of the spatial object corresponding to the spatial observation, and (2) respective values of one or more other features (see Heinonen, Paragraphs [0076], [0085], “the spatial data is a spatial point cloud data or a spatial two-dimensional data. It will be appreciated that the spatial point cloud data comprises a set of datapoints that represent objects or space in the environment. In an example, the spatial point cloud data is the spatial two-dimensional data when set of datapoints of the spatial point cloud represent information in ‘X’ and ‘Y’ geometric coordinates. … in a case where the spatial data is the spatial point cloud data, each point of the spatial point cloud data have certain properties (or attributes), such as time, size, intensity, return number, pulse width, resolution, colour, and the like.” [The spatial data comprises a set of data points (i.e., dataset) that represents objects, which may include a set of coordinates representing a location and other features.]).

However, Heinonen does not explicitly teach: deriving a plurality of values of a relational spatial feature based on pairwise spatial relationships between the spatial observations; and inserting the values of the relational spatial feature into the respective spatial observations.

Scarzanella teaches deriving a plurality of values of a relational spatial feature based on pairwise spatial relationships between the spatial observations, and inserting the values of the relational spatial feature into the respective spatial observations (see Scarzanella, Paragraph [0039], “At block S346, a spatial relationship quantifying section, such as feature vector generating section 104 or a sub-section thereof, quantifies spatial relationships among the selected coordinate and the nearest neighbor coordinates. The spatial relationships can be quantified by distance, area, angle, or any other measurable aspect between two or more coordinates with the set of N coordinates. The spatial relationship quantifying section can refer to stored feature vector parameters, such as feature vector parameters 116 within storage section 110.” [Spatial relationships among the different coordinates may be quantified by distance (i.e., a relational spatial feature based on pairwise spatial relationships).]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Heinonen (teaching a method and system for processing spatial data) with Scarzanella (teaching object localization based on spatial relationships) and arrived at a method that incorporates spatial relationships. One of ordinary skill in the art would have been motivated to make such a combination for the purpose of increasing the precision of the locating function (see Scarzanella, Paragraph [0048]). In addition, both references (Heinonen and Scarzanella) are analogous art directed to the same field of endeavor, spatial data. The close relation between the references suggests a reasonable expectation of success.

The combination of Heinonen and Scarzanella further teaches training one or more machine learning models by performing one or more machine learning processes on the dataset (see Heinonen, Paragraph [0115], “Referring to FIG. 4 there is shown an exemplary scenario 400 for classification of spatial data into a plurality of object classes using a trained deep neural network, in accordance with an embodiment of the present disclosure.” [The spatial data is classified using a trained deep neural network (i.e., machine learning model).]).

Regarding claim 62, Heinonen in view of Scarzanella teaches all the limitations of claim 61.
Scarzanella further teaches wherein deriving the values of the relational spatial feature comprises: for each pair of the spatial observations, determining a respective pairwise distance between the pair of spatial observations based on the values of the location features of the pair of spatial observations; for each of the spatial observations, identifying a set of neighboring observations among the plurality of spatial observations by applying a neighborhood function to the pairwise distances associated with the respective spatial observation; and for each of the spatial observations, determining the respective value of the relational spatial feature based on values of one or more features of the neighboring observations of the respective spatial observation (see Scarzanella, Paragraph [0039], “At block S346, a spatial relationship quantifying section, such as feature vector generating section 104 or a sub-section thereof, quantifies spatial relationships among the selected coordinate and the nearest neighbor coordinates. The spatial relationships can be quantified by distance, area, angle, or any other measurable aspect between two or more coordinates with the set of N coordinates. The spatial relationship quantifying section can refer to stored feature vector parameters, such as feature vector parameters 116 within storage section 110.” [Spatial relationships (i.e., the relational spatial feature) among the coordinates and nearest neighbor coordinates may be quantified by distance (i.e., pairwise distance).]).

Regarding claim 63, Heinonen in view of Scarzanella teaches all the limitations of claim 62.
Scarzanella further teaches wherein the pairwise distance between the pair of spatial observations is a function of the values of the location features of the pair of spatial observations (see Scarzanella, Paragraph [0039], “At block S346, a spatial relationship quantifying section, such as feature vector generating section 104 or a sub-section thereof, quantifies spatial relationships among the selected coordinate and the nearest neighbor coordinates. The spatial relationships can be quantified by distance, area, angle, or any other measurable aspect between two or more coordinates with the set of N coordinates. The spatial relationship quantifying section can refer to stored feature vector parameters, such as feature vector parameters 116 within storage section 110.” [A distance (i.e., pairwise distance) between coordinates (i.e., location features) of objects may be determined.]).

Regarding claim 64, Heinonen in view of Scarzanella teaches all the limitations of claim 63. Scarzanella further teaches wherein the function corresponds to a particular type of spatial relationship (see Scarzanella, Paragraph [0048], “At block S347, a quantified spatial relationship ordering section, such as feature vector generating section 104 or a sub-section thereof, orders the quantified spatial relationships within each type of quantified spatial relationship from largest to smallest.” [The quantified spatial relationship corresponds to a type.]).

Regarding claim 65, Heinonen in view of Scarzanella teaches all the limitations of claim 62. Scarzanella further teaches wherein the set of neighboring observations for at least one of the spatial observations is empty (see Scarzanella, Paragraph [0038], “At block S344, a nearest neighbor computing section, such as feature vector generating section 104 or a sub-section thereof, computes the nearest coordinates to the coordinate selected at S342. In some embodiments, the set of M 3D coordinates is input into a K-dimensional tree for fast lookup of the ‘nearest neighbor’ coordinates.” [It is implied that there may be no nearest neighbor coordinates.]).

Regarding claim 66, Heinonen in view of Scarzanella teaches all the limitations of claim 62. Scarzanella further teaches wherein the relational spatial feature comprises a spatially lagged variable, a local indicator of spatial autocorrelation, an indication of spatial cluster membership, and/or a significance score (see Scarzanella, Paragraph [0053], “At block S553, a criteria selecting section, such as locating function producing section 106 or a sub-section thereof, selects criteria for the selected node, such as which spatial relationship of the feature vectors to consider, and the threshold value for separating the feature vectors into clusters.” [The spatial relationship (i.e., relational spatial feature) of the feature vectors may be clustered (i.e., an indication of spatial cluster membership).]).

Regarding claim 67, Heinonen in view of Scarzanella teaches all the limitations of claim 62. Scarzanella further teaches wherein the respective value of the relational spatial feature is further based on the pairwise distances between the respective spatial observation and the neighboring observations of the respective spatial observation (see Scarzanella, Paragraph [0039], quoted above with respect to claim 62 [A distance (i.e., pairwise distance) between selected coordinates and nearest neighbor coordinates may be determined.]).

Regarding claim 87, Heinonen in view of Scarzanella teaches all the limitations of claim 61. Scarzanella further teaches wherein, for each of the spatial objects, the representative location of the respective spatial object is a location of a central tendency of the respective spatial object (see Scarzanella, Paragraph [0048], “within the particular quantified relationships C concerning the distance between each coordinate and the centroid x of the set of N coordinates, the distances C are ordered from largest to smallest.” [The centroid (i.e., a location of a central tendency of the respective spatial object) is considered when determining the spatial relationships.]).

Regarding claims 88-95, Heinonen in view of Scarzanella teaches all of the limitations of claims 61-67 and 87, in method form rather than in system form. Heinonen also discloses a system (Paragraph [0110]). Therefore, the supporting rationale of the rejection of claims 61-67 and 87 applies equally to those elements of claims 88-95.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUSAM TURKI SAMARA, whose telephone number is (571) 272-6803. The examiner can normally be reached Monday - Thursday and alternate Fridays.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Apu Mofiz, can be reached at (571) 272-4080.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/HUSAM TURKI SAMARA/
Examiner, Art Unit 2161

/APU M MOFIZ/
Supervisory Patent Examiner, Art Unit 2161
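The "relational spatial features" central to the §103 rejection of claims 61-67 (pairwise distances between observation locations, a neighborhood function over those distances, and a value such as a spatially lagged variable derived from neighbors' features) can be sketched in a few lines. This is an illustrative sketch written for this summary only, not code from the application or either cited reference; all names and the choice of a k-nearest neighborhood function are hypothetical.

```python
import math

def pairwise_distances(locations):
    """Distance between every pair of observation locations (location features)."""
    n = len(locations)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d[i][j] = d[j][i] = math.dist(locations[i], locations[j])
    return d

def k_nearest(d, i, k):
    """Neighborhood function: indices of the k nearest other observations."""
    order = sorted((dist, j) for j, dist in enumerate(d[i]) if j != i)
    return [j for _, j in order[:k]]

def spatial_lag(values, locations, k=2):
    """Spatially lagged variable: mean of each observation's neighbors' values."""
    d = pairwise_distances(locations)
    return [sum(values[j] for j in k_nearest(d, i, k)) / k
            for i in range(len(values))]

locs = [(0, 0), (1, 0), (0, 1), (10, 10)]
vals = [1.0, 3.0, 5.0, 100.0]
lags = spatial_lag(vals, locs, k=2)
# lags[0] averages the values of the two observations nearest (0, 0)
```

The brute-force distance matrix here is O(n²); Scarzanella's paragraph [0038] instead mentions a K-dimensional tree for fast nearest-neighbor lookup, which a production implementation would likely use.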

Prosecution Timeline

Jul 29, 2024
Application Filed
Dec 30, 2025
Non-Final Rejection under §102 and §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591581: PROGRAMMATIC DATA PROCESSING SYSTEM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591570: SYSTEMS AND METHODS FOR FINDING NEAREST NEIGHBORS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12541523: CONTEXT DRIVEN ANALYTICAL QUERY ENGINE WITH VISUALIZATION INTELLIGENCE (granted Feb 03, 2026; 2y 5m to grant)
Patent 12511299: OFFLINE EVALUATION OF RANKING FUNCTIONS (granted Dec 30, 2025; 2y 5m to grant)
Patent 12493602: MULTIHOST DATABASE HOST REMOVAL SHORTCUT (granted Dec 09, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 55%
With Interview: 74% (+18.7%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 164 resolved cases by this examiner. Grant probability derived from career allow rate.
