Prosecution Insights
Last updated: April 19, 2026
Application No. 18/495,235

SYSTEMS AND METHODS FOR DETERMINING DIFFERENCES BETWEEN VECTOR DATASETS

Status: Final Rejection (§101, §103)
Filed: Oct 26, 2023
Examiner: MAY, ROBERT F
Art Unit: 2154
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Woven By Toyota Inc.
OA Round: 4 (Final)

Grant Probability: 76% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allowance Rate: 76% (216 granted / 286 resolved), +20.5% vs Tech Center average (above average)
Interview Lift: +29.7% among resolved cases with an interview (strong)
Average Prosecution: 3y 3m; 41 applications currently pending
Career History: 327 total applications across all art units

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 45.6% (+5.6% vs TC avg)
§102: 18.0% (-22.0% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)

Tech Center averages are estimates. Based on career data from 286 resolved cases.

Office Action

Rejections under §101 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This Action is responsive to the Amendments and Remarks filed on 12/29/2025. Claims 1-18 and 21-22 are pending. Claims 1, 8, and 15 are written in independent form. Claims 19-20 have been cancelled. Claims 21-22 are newly added.

Claim Interpretation

Claims 1, 8, and 15 recite the phrase “to determine”, which is understood as the intent to determine rather than as actively performing any determining step/limitation. The Examiner suggests amending the claim limitations to recite all of the steps in a positive manner.

Claims 1, 8, and 15 recite the limitation “comparing the signature keys in the first data structure with the signature keys in the second data structure to determine features that are present in the first vector dataset and not present in the second vector dataset and features that are present in the second vector dataset and not present in the first vector dataset”, which is interpreted to have a scope of “comparing the signature keys in the first data structure with the signature keys in the second data structure”. However, for the purpose of compact prosecution, the limitation is addressed herein as if all of the steps were recited in a positive manner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: “Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.”

Claims 1-18 and 21-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-patentable subject matter: one or more abstract ideas without significantly more. The judicial exception is not integrated into a practical application.
The claims do not include additional elements sufficient to amount to significantly more than the judicial exception. The eligibility analysis supporting these findings is provided below.

As per Claims 1, 8, and 15:

STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed method (claims 1-7), apparatus (claims 8-14), and method (claims 15-18) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.

STEP 2A Prong One: The independent claims 1, 8, and 15 recite the following limitations directed to an abstract idea:

“For each feature of the plurality of features in the first vector dataset and the second vector dataset, generating a signature key based on geometric attributes of each feature;” The limitation recites a mathematical concept of executing a mathematical function that generates a signature key from an input of a geometric attribute of a particular feature.

“Comparing the signature keys in the first data structure with the signature keys in the second data structure to determine features that are present in the first vector dataset and not present in the second vector dataset and features that are present in the second vector dataset and not present in the first vector dataset;” The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed in the human mind, or by a human using pen and paper, by observing and evaluating the signature keys in the first data structure and the signature keys in the second data structure and, based on that observation and evaluation, forming a judgement and/or opinion of the features that are present in the first vector dataset and not present in the second vector dataset and the features that are present in the second vector dataset and not present in the first vector dataset.
“Wherein comparing the signature keys comprises determining whether a first signature key in the first data structure matches a first signature key in the second data structure based on whether a threshold number of tokens between the first signature key in the first data structure and the first signature key in the second data structure match.” The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed in the human mind, or by a human using pen and paper, by observing and evaluating the tokens in the first signature key in the first data structure, the tokens in the first signature key in the second data structure, and a threshold number; based on that observation and evaluation, forming a judgement and/or opinion that a threshold number of tokens between the two signature keys match; and, based on the determination that a threshold number of tokens match, forming a further judgement and/or opinion that the first signature key in the first data structure matches the first signature key in the second data structure.

It is noted that Paragraph [0023] of the present specification provides an example signature key as merely a set of numbers, such as “an example signature key is (-81.93519516600003, 28.849425929; -81.93478115899995, 28.85074578500013); 4; 578.1741659594979; 334.4290976623049; -2115.7274136543274)”, which is understood as a representation of a signature key that a human could reasonably observe, evaluate, and compare against another similarly represented signature key.

STEP 2A Prong Two: Claims 1, 8, and 15 recite “a processor” and “a memory”, which is a high-level recitation of generic computer components and represents mere instructions to apply the abstract idea on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
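For concreteness, the signature key described in the claims and in Paragraph [0023] can be sketched in code. This is a hypothetical illustration only, not the applicant's actual implementation: the token layout (two bounding-box corners, vertex count, total azimuth, total length, total elevation) follows the claim language and the example key, while the azimuth convention and vertex format are assumptions.

```python
import math

def signature_key(vertices):
    """Build a signature key for a vector-map feature.

    Hypothetical sketch: each vertex is assumed to be (x, y, elevation).
    Tokens: two opposite bounding-box corners, the number of vertices,
    and summed azimuth, length, and elevation values, following the
    claimed summation-over-successive-vertices definitions.
    """
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    bbox_min = (min(xs), min(ys))          # first bounding-box corner
    bbox_max = (max(xs), max(ys))          # second bounding-box corner
    total_azimuth = 0.0
    total_length = 0.0
    for (x1, y1, _), (x2, y2, _) in zip(vertices, vertices[1:]):
        # Azimuth (degrees from north, an assumed convention) and planar
        # length of the segment between successive vertices.
        total_azimuth += math.degrees(math.atan2(x2 - x1, y2 - y1))
        total_length += math.hypot(x2 - x1, y2 - y1)
    total_elevation = sum(v[2] for v in vertices)  # summed over every vertex
    return (bbox_min, bbox_max, len(vertices),
            total_azimuth, total_length, total_elevation)
```

As the Action observes, the resulting key is just a small tuple of numbers that a human could read and compare by hand.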
Claims 1, 8, and 15 recite the following additional elements:

“Receiving, by a processor, the first vector dataset and the second vector dataset;” The limitation recites insignificant extra-solution activity as retrieval of data (i.e., mere data gathering), such as ‘obtaining information’ as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

“wherein each of the first vector dataset and the second vector dataset comprises a plurality of features contained within a respective high-definition LiDAR map of an environment of a vehicle, each feature being defined by at least one vertex;” The limitation recites insignificant extra-solution activity as selecting a particular type of data being received, namely feature data contained within a specific type of environment/structure, as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

“Storing the signature key for each feature of the plurality of features of the first vector dataset in a first data structure;” The limitation recites insignificant extra-solution activity as retrieval of data (i.e., mere data gathering) by sending/storing data as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

“Storing the signature key for each feature of the plurality of features of the second vector dataset in a second data structure;” The limitation recites insignificant extra-solution activity as retrieval of data (i.e., mere data gathering) by sending/storing data as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

Claims 1 and 8 further recite the following additional element:

“Wherein each signature key comprises a plurality of tokens;” The limitation recites insignificant extra-solution activity as selecting a particular type of data being included in each signature key as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
Claim 15 further recites the following additional elements:

“Each signature key comprising a plurality of tokens including a first vertex and a second vertex defining a bounding box, a number of vertices of the feature, a total azimuth value, a total length value, and a total elevation value, wherein:” The limitation recites insignificant extra-solution activity as selecting a particular type of data being included in the signature key as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

“The total azimuth value comprises a summation of an azimuth between successive vertices of the feature;” The limitation recites insignificant extra-solution activity as selecting a particular type of data being represented as the “total azimuth value” as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

“The total length value comprises a summation of a length of segments between successive vertices of the feature;” The limitation recites insignificant extra-solution activity as selecting a particular type of data being represented as the “total length value” as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

“The total elevation value comprises a summation of an elevation for each vertex;” The limitation recites insignificant extra-solution activity as selecting a particular type of data being represented as the “total elevation value” as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.

STEP 2B: The conclusions regarding the mere implementation using a computer are carried over and do not provide significantly more.
With respect to “Receiving, by a processor, the first vector dataset and the second vector dataset,” identified as insignificant extra-solution activity above, this is also well-understood, routine, and conventional (WURC) activity as court-identified; see MPEP 2106.05(d)(II)(i).

With respect to “wherein each of the first vector dataset and the second vector dataset comprises a plurality of features contained within a respective high-definition LiDAR map of an environment of a vehicle, each feature being defined by at least one vertex;” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).

With respect to “Storing the signature key for each feature of the plurality of features of the first vector dataset in a first data structure;” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(i).

With respect to “Storing the signature key for each feature of the plurality of features of the second vector dataset in a second data structure;” identified as insignificant extra-solution activity above, this is also WURC as court-identified; see MPEP 2106.05(d)(II)(i).

With respect to Claims 1 and 8 reciting “Wherein each signature key comprises a plurality of tokens;” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).

With respect to Claim 15 reciting “Each signature key comprising a plurality of tokens including a first vertex and a second vertex defining a bounding box, a number of vertices of the feature, a total azimuth value, a total length value, and a total elevation value,” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 15 reciting “The total azimuth value comprises a summation of an azimuth between successive vertices of the feature;” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).

With respect to Claim 15 reciting “The total length value comprises a summation of a length of segments between successive vertices of the feature;” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).

With respect to Claim 15 reciting “The total elevation value comprises a summation of an elevation for each vertex;” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).

Looking at the claim as a whole does not change this conclusion, and the claim is ineligible.

As per Dependent Claims 2-7, 9-14, 16-18, and 21-22:

STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed method (claims 1-7), apparatus (claims 8-14), and method (claims 15-18) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.

STEP 2A Prong One: The dependent claims 2-7, 9-14, and 16-18 recite the following limitations directed to an abstract idea:

The limitation of Dependent Claims 3, 10, and 17 includes the step(s) of:

“Scanning an environment using a LiDAR scanning system;” The limitation recites a mathematical concept of executing a function via a LiDAR scanning system that scans an environment.

“Creating the respective high-definition LiDAR map based on results of the scan;” The limitation recites a mathematical concept of executing a function that takes as input a LiDAR scan and outputs a high-definition LiDAR map.
The limitation of Dependent Claims 4, 11, and 18 includes the step(s) of:

“Receiving a selection of a feature that is present in the first vector dataset and not present in the second vector dataset or present in the second vector dataset and not present in the first vector dataset;” The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed in the human mind by observing particular features, either only present in the first vector dataset or only present in the second vector dataset, and making a judgement to choose or select one of the observed features.

“Adjusting vector data in at least one of the first vector dataset and the second vector dataset.” The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed in the human mind by making a judgement and/or opinion to change vector data values in the first or second vector dataset.

STEP 2A Prong Two: The claim(s) recite the following additional elements:

The limitation of Dependent Claims 2, 9, and 16 includes the step(s) of:

“Displaying, in a graphical user interface, at least one of the features that are present in the first vector dataset and not present in the second vector dataset and the features that are present in the second vector dataset and not present in the vector dataset.” The limitation recites a high-level recitation of generic computer components and represents mere instructions to apply the abstract idea on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.

The limitation of Dependent Claims 3, 10, and 17 includes the step(s) of:

“wherein the first vector dataset and the second vector dataset are based on the respective high-definition LiDAR map.”
The limitation recites insignificant extra-solution activity as selecting a particular type of data that the vector datasets are “based on”, or that the vector datasets comprise, as identified in MPEP 2106.05(g), and does not provide integration into a practical application. It is further noted that the independent claims 1, 8, and 15 already recite a variation of this scope by reciting “wherein each of the first vector dataset and the second vector dataset comprises a plurality of features contained within a respective high-definition LiDAR map” and are thus “based on” features contained within the respective high-definition LiDAR map.

The limitation of Dependent Claims 5, 12, and 21 includes the step(s) of:

“Wherein each signature key comprises at least one token including at least one azimuth value that is greater than or equal to zero and at least one length value that is greater than or equal to zero.” The limitation recites insignificant extra-solution activity as selecting a particular type of data being included in the signature key, and subsequently in at least one token of the signature key, as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

The limitation of Dependent Claims 6, 13, and 22 includes the step(s) of:

“Wherein each signature key comprises, for a corresponding feature, a plurality of tokens including a first vertex, a second vertex, a number of vertices, a total azimuth value, a total length value, and a total elevation value.” The limitation recites insignificant extra-solution activity as selecting a particular type of data being included in the signature key, and subsequently in the plurality of tokens comprising the signature key, as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
The limitation of Dependent Claims 7 and 14 includes the step(s) of:

“Wherein the first vertex and the second vertex define upper and lower corners of a bounding box surrounding the corresponding feature.” The limitation recites insignificant extra-solution activity as selecting a particular type of data being included in the first and second vertex as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.

STEP 2B: The conclusions regarding the mere implementation using a computer are carried over and do not provide significantly more.

With respect to Claims 3, 10, and 17 reciting “wherein the first vector dataset and the second vector dataset are based on the respective high-definition LiDAR map,” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).

With respect to Claims 5, 12, and 21 reciting “Wherein each signature key comprises at least one token including at least one azimuth value that is greater than or equal to zero and at least one length value that is greater than or equal to zero,” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).

With respect to Claims 6, 13, and 22 reciting “Wherein each signature key comprises, for a corresponding feature, a plurality of tokens including a first vertex, a second vertex, a number of vertices, a total azimuth value, a total length value, and a total elevation value,” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claims 7 and 14 reciting “Wherein the first vertex and the second vertex define upper and lower corners of a bounding box surrounding the corresponding feature,” identified as insignificant extra-solution activity above, this is also considered WURC as court-identified; see MPEP 2106.05(d)(II)(iv).

Looking at the claim as a whole does not change this conclusion, and the claim is ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: “A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.”

Claims 1-4 and 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Vianello (U.S. Pre-Grant Publication No. 2023/0230250) in view of Moustafa et al. (U.S. Pre-Grant Publication No. 2022/0126864, hereinafter Moustafa) and Burnett (U.S. Patent No. 12,242,892).
Regarding Claim 1: Vianello teaches a method of determining changes between a first vector dataset and a second vector dataset, the method comprising:

“Receiving, by a processor, the first vector dataset and the second vector dataset, wherein each of the first vector dataset and the second vector dataset comprises a plurality of features contained within a respective high-definition LiDAR map, each feature being defined by at least one vertex;” Vianello teaches comparing first and second object representations by “comparing feature vectors (e.g., extracted using the appearance change-agnostic model) and/or attribute vectors” (Para. [0109]), thereby teaching receiving a first and second vector dataset for the object representations for comparison, wherein the vector dataset comprises feature and/or attribute vectors. Vianello further teaches “the object information 20 can be from the same information provider…but can additionally and/or alternatively be from different information providers” with a modality such as LIDAR (Para. [0062]) and “each measurement can additionally and/or alternatively be depth information…, polygons, point clouds (e.g., from LIDAR…)” (Para. [0065]), thereby teaching that the feature can be defined by a vertex. The present specification states that “High-definition maps are three-dimensional maps that include features derived from vector datasets resulting from light detection and ranging (LiDAR) scans of an environment. The LiDAR scan results in a point cloud of LiDAR points within the environment” (Para. [0001]). Therefore, Vianello teaches the features contained within a respective high-definition LiDAR map by teaching “each measurement…can additionally and/or alternatively be point clouds (e.g., from LIDAR” (Para. [0065]), because point clouds are 3-D mappings which Vianello teaches are derived “from LIDAR”.

“For each feature of the plurality of features in the first vector dataset and the second vector dataset, generating a signature key based on geometric attributes of each feature;” Vianello teaches “the object identifier can be: a hash of the object’s characteristics (e.g., geometric characteristics, visual characteristics, etc.), generated from the object representations associated with the object version” (Para. [0102]).

“Storing the signature key for each feature of the plurality of features of the first vector dataset in a first data structure;” Vianello teaches “storage (e.g., configured to store the object representations, object versions, data associated with the object versions, etc.)” (Para. [0041]), thereby teaching storing the hash of the features of the data vector for each of the object representations or object versions.

“Storing the signature key for each feature of the plurality of features of the second vector dataset in a second data structure;” Vianello teaches “storage (e.g., configured to store the object representations, object versions, data associated with the object versions, etc.)” (Para. [0041]), thereby teaching storing the hash of the features of the data vector for each of the object representations or object versions.

“Comparing the first data structure with the second data structure to determine features that are present in the first vector dataset and not present in the second vector dataset and features that are present in the second vector dataset and not present in the first vector dataset.” Vianello teaches “relationships between representations (and/or respective object versions) can be determined based on rules and heuristics; example shown in Fig. 9” (Para. [0129]), where Figure 9 shows an identification/determination of segments that are common or different between each of the object versions being compared, as well as whether the compared segments have been unchanged/same, modified, replaced, added/created, or removed/terminated.
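The claimed comparison of the two data structures amounts to a set difference over the stored keys. A minimal sketch, assuming each data structure is a dict mapping a feature's signature key to the feature itself (exact key matching only; the threshold-based token matching is a separate limitation):

```python
def diff_datasets(first_keys, second_keys):
    """Return (only_in_first, only_in_second) for two dicts that map
    each feature's signature key to its feature.

    Hypothetical sketch of comparing the first data structure with the
    second to find features present in one vector dataset but not the
    other.
    """
    only_in_first = [feat for key, feat in first_keys.items()
                     if key not in second_keys]
    only_in_second = [feat for key, feat in second_keys.items()
                      if key not in first_keys]
    return only_in_first, only_in_second
```

Using hashable keys makes each membership test a constant-time dict lookup, which is the practical point of keying features by signature rather than comparing raw geometry.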
Vianello explicitly teaches all of the elements of the claimed invention as recited above except: a plurality of features contained within a respective high-definition LiDAR map of an environment of a vehicle; wherein each signature key comprises a plurality of tokens; comparing the signature keys in the first data structure with the signature keys in the second data structure; and wherein comparing the signature keys comprises determining whether a first signature key in the first data structure matches a first signature key in the second data structure based on whether a threshold number of tokens between the first signature key in the first data structure and the first signature key in the second data structure match.

However, in the related field of endeavor of object point cloud comparison and changing environments, Moustafa teaches:

“a plurality of features contained within a respective high-definition LiDAR map of an environment of a vehicle;” Moustafa teaches determining a difference where “the data collected by the various sensors of an autonomous vehicle may be compared with data present in a relevant tile of the HD map downloaded to the autonomous vehicle. If there is a difference between the collected data and the HD map data, the delta (difference of the HD map tile and the newly collected data) may be transferred to the server hosting the HD map so that the HD map tile at that particular location may be updated” (Para. [0296]).

“Comparing the signature keys in the first data structure with the signature keys in the second data structure.” Moustafa teaches “the data collected by the various sensors of an autonomous vehicle may be compared with data present in a relevant tile of the HD map downloaded to the autonomous vehicle. If there is a difference between the collected data and the HD map data, the delta (difference of the HD map tile and the newly collected data) may be transferred to the server hosting the HD map so that the HD map tile at that particular location may be updated” (Para. [0296]). Moustafa further teaches “the digital signature may be generated by hashing the sensor data and encrypting the hash using the private key” (Para. [0770]), where “the network protocol…may use the public key to verify the digital signature (e.g., by unencrypting the hash using the public key and verifying the hashes match)” (Para. [0771]). Therefore, Moustafa teaches using signature keys representing the sensor data for comparing representations of the sensor data.

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Moustafa and Vianello at the time that the claimed invention was effectively filed, to have combined the detection of changes in driving-environment features over time through sensor data comparison, as taught by Moustafa, with the systems and methods for detecting and comparing object attributes and measurements over time, as taught by Vianello. One would have been motivated to make such a combination because Vianello teaches “object representations for objects within and/or encompassing the geographic region 10 are extracted and analyzed from object information depicting or associated with the geographic region to determine whether the object has changed over time” (Para. [0040]), where “The objects are preferably physical objects, but can be any other object. The objects can be: structures (e.g., built structures, such as buildings, etc.), a portion of a structure (e.g., a building component, a roof, a wall, etc.), vegetation, manmade artifacts (e.g., pavement, driveways, roads, lakes, pools, etc.), and/or be any other suitable physical object” (Para. [0035]), and Moustafa teaches “if there is any change in the environment (for example, there is a road work, accident, etc.) the HD map should be updated to reflect the change. In some implementations, data from a number of autonomous vehicles may be crowdsourced and used to update the HD map.” (Para. [0295]), and it would be obvious to a person having ordinary skill in the art that using crowdsourced vehicle data to determine whether objects have changed over time would increase the ability to track changes over time by incorporating a higher number of sensors collecting data on the environment, including the object(s).

Moustafa and Vianello explicitly teach all of the elements of the claimed invention as recited above except: wherein each signature key comprises a plurality of tokens; and wherein comparing the signature keys comprises determining whether a first signature key in the first data structure matches a first signature key in the second data structure based on whether a threshold number of tokens between the first signature key in the first data structure and the first signature key in the second data structure match.

However, in the related field of endeavor of data comparison, Burnett teaches:

“wherein each signature key comprises a plurality of tokens;” Burnett teaches converting data into “a vector comprising one or more tokens”, where “the individual tokens can be extracted and inserted as elements of a comparable data structure (e.g., a vector)” (Col. 170, Lines 46-53).

“Wherein comparing the signature keys comprises determining whether a first signature key in the first data structure matches a first signature key in the second data structure based on whether a threshold number of tokens between the first signature key in the first data structure and the first signature key in the second data structure match.”
Burnett teaches converting data into “a vector comprising one or more tokens” where “the individual tokens can be extracted and inserted as elements of a comparable data structure (e.g., a vector” (Col. 170 Lines 46-53).Burnett further teaches “a number of tokens in a vector that match tokens of a first pattern are counted. For example, the pattern matcher(s) 3404 can walk through a string vector, token by token, and compare each token to the corresponding token in the first pattern” where “the number of matching tokens is compared to a threshold” and “a determination is made that the vector corresponds to the first pattern in response the number of matching tokens satisfying the threshold.” (Col. 171 Line 61 – Col. 172 Line 8). Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Burnett, Moustafa, and Vianello at the time that the claimed invention was effectively filed, to have combined the vector generation using tokens obtained from raw data, as taught by Burnett, with the detection of changes in driving environment features over time through sensor data comparison, as taught by Moustafa, and the systems and methods for detecting and comparing object attributes and measurements over time, as taught by Vianello. One would have been motivated to make such combination because Moustafa teaches “the digital signature may be generated by hashing the sensor data” (Para. [0770]) and Burnett further clarifies the details of hashing sensor data that Moustafa is silent on by teaching “machine data can be raw machine data that is generated by various components in IT environments, such as servers, sensors, routers, mobile devices, Internet of Things (IoT) devices, etc. Machine data can include system logs, network packet data, sensor data, application program data, error logs, stack traces, system performance data, etc.” (Col. 
8 Lines 25-44) where “the tokenizer 6902 can take the text 6901 comprised within ingested raw machine data and extract one or more tokens 6903 or fields from the text 6901” and “the vector generator 6904 can generate a vector 6905 using the token(s) 6903. For example, the vector generator 6904 can use an algorithm, such as hashing TF or CountVectorizer, to generate the vector 6905 using the token(s) 6903.” (Col. 198 Lines 1-12).

Regarding Claim 2: Burnett, Moustafa, and Vianello further teach: Displaying, in a graphical user interface, at least one of the features that are present in the first vector dataset and not present in the second vector dataset and the features that are present in the second vector dataset and not present in the first vector dataset. Vianello teaches “S400 can additionally include providing the analysis to an endpoint…through an interface, report, or other medium” (Para. [0153]) where “examples of analyses that can be determined include: the relationship between a first and second object version (e.g., the next object version, a prior object version, the current object version etc.); a timeseries of changes between a first and second object version (e.g., change type, change time, building development history, parcel development history, etc.); whether the requested object version is the most recent object version for the geographic region; the most recent object version for a requested geographic region; the timeframe or duration for a requested object version; data associated with a requested object version (e.g., measurements, object representations, feature vectors, attribute values, etc.); an analysis of the timeseries that the requested object version is a part of (e.g., number of changes, types of changes, average object version duration, etc.); an analysis of the object versions associated with a requested timeframe; an analysis of the auxiliary data associated with the object version (e.g., statistical analysis, lookup, prediction,
etc.); anomaly detection (e.g., a temporary structure); and/or other analyses (e.g., examples shown in FIG. 4)” (Para. [0145])

Regarding Claim 3: Burnett, Moustafa, and Vianello further teach: Scanning an environment using a LiDAR scanning system, and Vianello teaches comparing first and second object representations by “comparing feature vectors (e.g., extracted using the appearance change-agnostic model) and/or attribute vectors” (Para. [0109]) thereby teaching receiving a first and second vector dataset for the object representations for comparison, wherein the vector dataset comprises feature and/or attribute vectors. Vianello further teaches “the object information 20 can be from the same information provider…but can additionally and/or alternatively be from different information providers” with a modality such as LIDAR (Para. [0062]) and “each measurement can additionally and/or alternatively be depth information…, polygons, point clouds (e.g., from LIDAR…)” (Para. [0065]) thereby teaching that the feature can be defined by a vertex. Moustafa also teaches “Combining multiple LIDAR data scans to increase their resolution. To the best of our knowledge, achieving super-resolution with LIDAR data is an entirely new field. [0838] Combining multiple camera images captured at a given limited dynamic range, to achieve higher dynamic range. [0839] Combining multiple camera images or multiple LIDAR scans to achieve noise reduction, e.g., suppressing noise present in each individual camera image or LIDAR scan. [0840] Combining camera and LIDAR images to achieve a higher detection rate of objects present in both modalities, but with independent “noise” sources.” (Paras. [0837]-[0840]). Therefore, Moustafa also teaches scanning an environment using a LiDAR scanning system that produces “LIDAR data scans”.
creating the respective high-definition LiDAR map based on results of the scan, wherein the first vector dataset and the second vector dataset are based on the respective high-definition LiDAR map. Vianello teaches “each measurement…can additionally and/or alternatively be point clouds (e.g., from LIDAR” (Para. [0065]) thereby teaching creating the high-definition LiDAR map based on data collected “from LIDAR”. Vianello further teaches “comparing feature vectors (e.g., extracted using the appearance change-agnostic model) and/or attribute vectors” (Para. [0109]) where “the object information 20 can be from the same information provider…but can additionally and/or alternatively be from different information providers” with a modality such as LIDAR (Para. [0062]). Therefore, Vianello teaches the first and second vector datasets being based on object information and measurements collected from LIDAR and stored in a high-definition map such as a point cloud.

Regarding Claim 4: Burnett, Moustafa, and Vianello further teach: Receiving a selection of a feature that is present in the first vector dataset and not present in the second vector dataset or present in the second vector dataset and not present in the first vector dataset; and Vianello teaches “the object representation 30 is preferably determined by an object representation model, but can alternatively be determined by another suitable model, be determined by a user, be retrieved by a database, or be otherwise determined” (Para. [0091]) thereby teaching that a feature in a first or second dataset for an object representation can be selected and determined by a user. Adjusting vector data in at least one of the first vector dataset and the second vector dataset.
Vianello teaches “the object representation 30 is preferably determined by an object representation model, but can alternatively be determined by another suitable model, be determined by a user, be retrieved by a database, or be otherwise determined” (Para. [0091]) where an analysis can be provided “to an endpoint (e.g., an endpoint on a network, customer endpoint, user endpoint, automated valuation model system, etc.) through an interface, report, or other medium” (Para. [0153]) and “when the object has changed (e.g., the relationship between the latest and new representations is indicative of change), the new representation can be stored as the latest representation for the object (and/or a new object version can be created)” (Para. [0151]). Therefore, Vianello teaches a user that can modify or adjust the object representation with features determined by a user.

Regarding Claim 8: Some of the limitations herein are similar to some or all of the limitations of Claim 1. Burnett, Moustafa, and Vianello further teach: A processor (Vianello – Para. [0159]); and A memory storing instructions that, when executed by the processor, cause the apparatus to perform steps (Vianello – Para. [0159]).

Regarding Claim 9: All of the limitations herein are similar to some or all of the limitations of Claim 2.

Regarding Claim 10: All of the limitations herein are similar to some or all of the limitations of Claim 3.

Regarding Claim 11: All of the limitations herein are similar to some or all of the limitations of Claim 4.

Claim(s) 5-7, 12-18 and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Vianello, Moustafa, and Burnett, and further in view of Hougen (U.S. Pre-Grant Publication No. 2005/0273212).

Regarding Claim 5: Burnett, Moustafa, and Vianello further teach: Wherein each signature key comprises at least one token including determined values.
Burnett teaches converting data into “a vector comprising one or more tokens” where “the individual tokens can be extracted and inserted as elements of a comparable data structure (e.g., a vector” (Col. 170 Lines 46-53) and “a number of tokens in a vector that match tokens of a first pattern are counted. For example, the pattern matcher(s) 3404 can walk through a string vector, token by token, and compare each token to the corresponding token in the first pattern” where “the number of matching tokens is compared to a threshold” and “a determination is made that the vector corresponds to the first pattern in response the number of matching tokens satisfying the threshold.” (Col. 171 Line 61 – Col. 172 Line 8). Vianello teaches “the information determined for different object versions preferably has the same information modality (e.g., RGB, LIDAR, stereovision, radar, sonar, text, audio, etc.), but can additionally and/or alternatively have different information modalities” (Para. [0062]) where “Each measurement…can additionally and/or alternatively be depth information (e.g., digital elevation model (DEM), digital surface model (DSM), digital terrain model (DTM), etc.), polygons, point clouds (e.g., from LIDAR, stereoscopic analyses, correlated images sampled from different poses, etc.), radar, sonar, virtual models, audio, video, and/or any other suitable measurement.” (Para. [0065]). Burnett in combination with Vianello teaches a signature key/vector comprising one or more tokens extracted and inserted as elements (Burnett) where the extracted elements are information/values determined from different information modalities such as “RGB, LIDAR, stereovision, radar, sonar, text, audio, etc.” (Vianello). Burnett, Moustafa, and Vianello explicitly teach all of the elements of the claimed invention as recited above except: Wherein each signature key comprises at least one token including at least one azimuth value that is greater than or equal to zero and at least one
length value that is greater than or equal to zero. However, in the related field of endeavor of object classification for a vehicle, Hougen teaches: Wherein each signature key comprises at least one token including at least one azimuth value that is greater than or equal to zero and at least one length value that is greater than or equal to zero. Vianello teaches “the information determined for different object versions preferably has the same information modality (e.g., RGB, LIDAR, stereovision, radar, sonar, text, audio, etc.), but can additionally and/or alternatively have different information modalities” (Para. [0062]) where “Each measurement…can additionally and/or alternatively be depth information (e.g., digital elevation model (DEM), digital surface model (DSM), digital terrain model (DTM), etc.), polygons, point clouds (e.g., from LIDAR, stereoscopic analyses, correlated images sampled from different poses, etc.), radar, sonar, virtual models, audio, video, and/or any other suitable measurement.” (Para. [0065]). Hougen teaches “Radar provides derived measurements such as range, range rate, azimuth angle, elevation, and approximate size of an object, as well as other information: known in the art” (Para. [0034]). Hougen further teaches “the object detection signals are utilized to compute derived measurements such as object relative range, azimuth angle, velocity, and bearing information, as well as other object information known in the art” (Para. [0035]). Since azimuth angles can only be positive, negative, or zero, and relative ranges can only be positive, negative, or zero, Hougen teaches at least one azimuth value as being greater than or equal to zero when it is positive and at least one length value as being greater than or equal to zero when it is positive. Therefore, Vianello teaches determining measurements/values from different sources from same or different information modalities (RGB, LIDAR, stereovision, radar, sonar, text, audio, etc.)
and Hougen teaches that these determined measurements/values can further include “derived measurements” provided from a radar “such as range, range rate, azimuth angle, elevation, and approximate size of an object, as well as other information: known in the art” (Para. [0034]).

Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Hougen, Burnett, Moustafa, and Vianello at the time that the claimed invention was effectively filed, to have combined the inclusion of azimuth angles of objects as taught by Hougen with the vector generation using tokens obtained from raw data, as taught by Burnett, the detection of changes in driving environment features over time through sensor data comparison, as taught by Moustafa, and the systems and methods for detecting and comparing object attributes and measurements over time, as taught by Vianello. One would have been motivated to make such a combination because while Vianello teaches using RGB imaging data, depth information, and LiDAR data to collect object information (Figure 2), Hougen provides additional attributes/measurements to be collected and used for comparison, such as “radar provides derived measurements such as range, range rate, azimuth angle, elevation, and approximate size of an object, as well as other information: known in the art.” (Para. [0034]), and it would be obvious to a person having ordinary skill in the art that further attributes/measurements for comparison would improve the ability to determine similarities and differences between objects at particular points in time.

Regarding Claim 6: Hougen, Burnett, Moustafa, and Vianello further teach: Wherein each signature key comprises, for a corresponding feature, a plurality of tokens including a first vertex, a second vertex, a number of vertices, a total azimuth value, a total length value, and a total elevation value.
Burnett teaches converting data into “a vector comprising one or more tokens” where “the individual tokens can be extracted and inserted as elements of a comparable data structure (e.g., a vector” (Col. 170 Lines 46-53) thereby teaching the vector/key comprising a plurality of tokens (more than one token) representing extracted and inserted elements. Hougen teaches extracted elements including “a small rectangular region” that include different objects (Paras. [0060]-[0061]) where “once a sum-array has been generated and, for example, is in rectangular form, the sum of the values in the sum-array can be calculated from the values associated with the four corners of the rectangle” (Para. [0077]) thereby teaching a first and a second vertex, and a number of vertices as features. Hougen further teaches other extracted elements as including “radar provides derived measurements such as range, range rate, azimuth angle, elevation, and approximate size of an object, as well as other information: known in the art” (Para. [0034]) thereby teaching a total azimuth value, a total length value, and a total elevation value.

Regarding Claim 7: Hougen, Burnett, Moustafa, and Vianello further teach: Wherein the first vertex and the second vertex define upper and lower corners of a bounding box surrounding the corresponding feature. Hougen teaches features including “a small rectangular region” that include different objects (Paras. [0060]-[0061]) where “once a sum-array has been generated and, for example, is in rectangular form, the sum of the values in the sum-array can be calculated from the values associated with the four corners of the rectangle” (Para. [0077]) thereby teaching a first and a second vertex as upper and lower corners of a bounding box/rectangle surrounding features of an object.

Regarding Claim 12: All of the limitations herein are similar to some or all of the limitations of Claim 5.
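For orientation, the token-based signature-key matching discussed above (a key comprising vertex, azimuth, length, and elevation tokens, with two keys deemed to match when a threshold number of tokens agree, per Burnett at Col. 171 Line 61 – Col. 172 Line 8) can be sketched as follows. This is an illustrative sketch only; the token layout, the threshold value, and all names are assumptions for illustration, not taken from the claims or the cited references.

```python
def keys_match(key_a, key_b, threshold):
    """Walk two signature keys token by token, count the tokens that
    agree, and declare a match when the count satisfies the threshold."""
    matches = sum(1 for a, b in zip(key_a, key_b) if a == b)
    return matches >= threshold

def diff_datasets(first, second, threshold=4):
    """Return features whose signature keys appear in one data
    structure but have no threshold-matching key in the other."""
    only_first = [f for f, k in first.items()
                  if not any(keys_match(k, k2, threshold) for k2 in second.values())]
    only_second = [f for f, k in second.items()
                   if not any(keys_match(k, k2, threshold) for k2 in first.values())]
    return only_first, only_second

# Hypothetical token layout per key: (first vertex, second vertex,
# vertex count, total azimuth, total length, total elevation).
first = {"lane_edge": ((0, 0), (4, 2), 6, 180.0, 12.5, 3.0),
         "sign_A":    ((1, 1), (2, 3), 4,  90.0,  4.0, 2.0)}
second = {"lane_edge": ((0, 0), (4, 2), 6, 180.0, 12.4, 3.0),  # small drift
          "barrier":   ((5, 5), (6, 9), 8, 270.0, 20.0, 1.0)}

only_a, only_b = diff_datasets(first, second)
print(only_a, only_b)  # ['sign_A'] ['barrier']
```

Because five of the six "lane_edge" tokens agree, the threshold of four treats the slightly drifted key as the same feature, while "sign_A" and "barrier" surface as differences between the two datasets.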
Regarding Claim 13: All of the limitations herein are similar to some or all of the limitations of Claim 6.

Regarding Claim 14: All of the limitations herein are similar to some or all of the limitations of Claim 7.

Regarding Claim 15: Some of the limitations herein are similar to some or all of the limitations of Claims 1, 6, 7. Hougen, Burnett, Moustafa, and Vianello further teach: Wherein the total azimuth value comprises a summation of an azimuth between successive vertices of the feature; Hougen teaches “the attributed data includes a set of sum-arrays based on a set of features” (Para. [0039]) where “A sum-array may be generated and used to represent the squared values, which eases and decreases the amount of time in the calculation of the denominator of such normalized feature values. Feature values may refer to the sizes, shapes, colors, and edge characteristics of an object, as well as other feature values known in the art.” (Para. [0041]). Hougen further teaches “Radar provides derived measurements such as range, range rate, azimuth angle, elevation, and approximate size of an object, as well as other information: known in the art” (Para. [0034]) thereby teaching the azimuth as a feature. Wherein the total length value comprises a summation of a length of segments between successive vertices of the feature; and Hougen teaches “the attributed data includes a set of sum-arrays based on a set of features” (Para. [0039]) where “A sum-array may be generated and used to represent the squared values, which eases and decreases the amount of time in the calculation of the denominator of such normalized feature values. Feature values may refer to the sizes, shapes, colors, and edge characteristics of an object, as well as other feature values known in the art.” (Para. [0041]). Hougen further teaches “Radar provides derived measurements such as range, range rate, azimuth angle, elevation, and approximate size of an object, as well as other information: known in the art” (Para.
[0034]) thereby teaching the range/length as a feature. The total elevation value comprises a summation of an elevation for each vertex; Hougen teaches “the attributed data includes a set of sum-arrays based on a set of features” (Para. [0039]) where “A sum-array may be generated and used to represent the squared values, which eases and decreases the amount of time in the calculation of the denominator of such normalized feature values. Feature values may refer to the sizes, shapes, colors, and edge characteristics of an object, as well as other feature values known in the art.” (Para. [0041]). Hougen further teaches “Radar provides derived measurements such as range, range rate, azimuth angle, elevation, and approximate size of an object, as well as other information: known in the art” (Para. [0034]) thereby teaching elevation as a feature.

Regarding Claim 16: All of the limitations herein are similar to some or all of the limitations of Claim 2.

Regarding Claim 17: All of the limitations herein are similar to some or all of the limitations of Claim 3.

Regarding Claim 18: All of the limitations herein are similar to some or all of the limitations of Claim 4.

Regarding Claim 21: All of the limitations herein are similar to some or all of the limitations of Claim 5.

Regarding Claim 22: All of the limitations herein are similar to some or all of the limitations of Claim 6.

Response to Amendment

Applicant’s Amendments, filed on 12/29/2025, are acknowledged.

Response to Arguments

On page 10 of the Remarks filed on 12/29/2025 and in relation to the 101 rejection, Applicant argues that generating a signature key based on geometric attributes of each feature recites “no mathematical formula, neither in notation nor prose, sufficient to invoke classification as such under the MPEP's categorization of mathematical concepts.
Accordingly, Applicant respectfully disagrees and traverses, submitting the that claims do not recite a mathematical concept and thus would answer "No" at Alice Mayo Step 2A Prong I, finding eligibility without requiring any further analysis.” Applicant’s argument is not convincing because a signature key is understood as requiring a mathematical equation/formula to be generated, and therefore the limitation is understood as reciting a mathematical concept of executing a mathematical function that generates a signature key based on an input of a geometric attribute of a particular feature.

On pages 10-11 of the Remarks filed on 12/29/2025 and in relation to the 101 rejection, Applicant argues that “on pages 4-5 of the outstanding Action, the Office asserts that various recitations of the claims recite "a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind, or by a human using a pen and paper, by observing and evaluating" signature keys, tokens, or other claim elements.” and that “between the size of the data structures involved and the need to derive these differences quickly, Applicant respectfully contends that it is not practicable for a human to perform these steps using only pencil and paper within any time frame that has any reasonable utility. Accordingly, Applicant respectfully disagrees and traverses, submitting the that claims do not recite a mental process and thus would answer "No" at Alice Mayo Step 2A Prong I, finding eligibility without requiring any further analysis.” Applicant’s argument is not convincing for at least the reason that the claims are being given their broadest reasonable interpretation which does not place a “size of the data structures” limitation, and thus is not a factor when being considered as to whether or not a human could practically perform the steps.
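To illustrate the examiner's characterization above, that generating a signature key from a feature's geometric attributes necessarily applies a mathematical function to those attributes, a minimal sketch follows. The serialization format, the use of SHA-256, and all names are assumptions for illustration and are not drawn from the application or the cited references.

```python
import hashlib

def signature_key(vertices):
    """Hypothetical signature-key generation: serialize a feature's
    geometric attributes (its vertices) and hash the result. Any such
    scheme is, at bottom, a mathematical function of the geometry."""
    serialized = ",".join(f"{x:.3f}:{y:.3f}" for x, y in vertices)
    return hashlib.sha256(serialized.encode()).hexdigest()

# Identical geometry yields identical keys, so two datasets can be
# compared key-by-key without re-examining the raw geometry; any
# geometric change yields a different key.
a = signature_key([(0.0, 0.0), (4.0, 2.0), (4.0, 5.0)])
b = signature_key([(0.0, 0.0), (4.0, 2.0), (4.0, 5.0)])
c = signature_key([(0.0, 0.0), (4.0, 2.0), (4.0, 5.1)])
print(a == b, a == c)  # True False
```

The sketch only shows that key generation is a deterministic function of the geometric input; it takes no position on how the claimed invention actually computes its keys.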
On pages 12-13 of the Remarks filed on 12/29/2025 and in relation to the 103 rejection, Applicant argues that “paragraph [0296] of Moustafa is silent with respect to comparing hashed sensor data, signature keys, or the like - paragraph [0296] only discloses that "the data collected by the various sensors of an autonomous vehicle may be compared with data present in a relevant tile of the HD map downloaded to the autonomous vehicle."” and “the concept of signature keys does not appear again in Moustafa until paragraph [0770], which is also cited and relied upon by the Office. However, paragraph [0770] is silent with respect to comparing two potentially different datasets. Rather, paragraph [0770] concerns an altogether different and unrelated context - communication protocol security for the transmission of data from sensors of an autonomous vehicle to a logic unit of the autonomous vehicle. Moustafa states here that, "the digital signature may be used on a secure private key...In some cases, the digital signature may be generated by hashing the sensor data and encrypting the hash using the private key." Paragraph [0771] which continues with these concepts, is solely directed to network security - not data comparison.” concluding that “The Office's conclusion on page 18 that "Moustafa teaches using signature keys representing the sensor data for comparing representations of the sensor data" therefore has no basis in Moustafa. The Office endeavors here to combine two separate teachings in a way neither recognized nor used by Moustafa. Thus, Applicant respectfully submits that Moustafa fails to teach or suggest the recitation of the claims, "comparing the signature keys in the first data structure with the signature keys in the second data structure." 
The only source of such a teaching of record is Applicant's own specification.” Applicant’s argument is not convincing because paragraphs [0296] and [0770]-[0771] of Moustafa both teach the comparison of sensor data where [0296] teaches comparing for differences between two data sets (the HD map tile comprising previously collected sensor data and newly collected sensor data) and [0770]-[0771] further provides information on how a comparison of data sets might be performed (hashing the sensor data and encrypting the hash using the private key and then unencrypting the hash using the public key and verifying the hashes match).

On page 13 of the Remarks filed on 12/29/2025 and in relation to the 103 rejection, Applicant argues that “In regards to each signature key comprising tokens, the Office cites on page 19 of the outstanding Action to Burnett as allegedly curing this deficiency, and more specifically to col. 170:46-53 of Burnett, which states that "raw machine data is converted into a vector comprising one or more tokens. For example, the raw machine data can be job manager and/or task manager logs and/or other type(s) of application logs that are ingested and parsed to identify delimiters in the data. The delimiters may be considered[sic] to separate tokens, and the individual tokens can be extracted and inserted as elements of a comparable data structure (e.g., a vector, such as a string vector)."” and “As a reminder, and as noted above, the Office relies on Vianello's disclosures regarding hashing sensor data to obtain object identifiers in order to reach Applicant's claimed signature keys. The Office's proposed combination with Burnett asserted here would therefore require tokenizing hashed sensor data. Burnett does not disclose such a procedure, nor does Burnett provide any motivation to perform such a procedure once the sensor data has been hashed.
Burnett similarly does not present any reasonable expectation of success arising from the same.” Applicant’s argument is not convincing because while Applicant is presuming the combination would result in tokenizing hashed sensor data, it is also possible to hash tokenized sensor data. Further, Vianello actually teaches the latter, and away from Applicant’s argument, because Burnett teaches tokenizing raw data and inserting the tokenized data as elements of a comparable data structure such as a vector, and Vianello teaches “a hash of the object's characteristics (e.g., geometric characteristics, visual characteristics, etc.), generated from the object representations associated with the object version” (Para. [0102]) thereby teaching pre-existing representations/data structures comprising the data before the hashing of the object’s characteristics is performed.

On page 14 of the Remarks filed on 12/29/2025 and in relation to the 103 rejection, Applicant argues that “A person having ordinary skill in the art would recognize that hashing is a process that specifically obscures data and is one-way. In effect, any geometric data contained within the sensor data would be destroyed in such a process, which is why Moustafa's only use of the hashed data is for network security protocols. As such, while one could compare two similarly hashed datasets to identify if the datasets are identical, once these datasets are hashed, that comparison cannot tell you what, specifically, is different between the two datasets.
The Office's proposed combination here would therefore render it impossible to perform Applicant's claimed step, "comparing the signature keys in the first data structure with the signature keys in the second data structure to determine features that are present in the first vector dataset and not present in the second vector dataset and features that are present in the second vector dataset and not present in the first vector dataset."” Applicant’s argument is not convincing because contrary to Applicant’s assumption that the data is necessarily destroyed and therefore not accessible for any future steps, Moustafa instead teaches appending the digital signature to the sensor data (Para. [0770]).

On pages 14-15 of the Remarks filed on 12/29/2025 and in relation to the 103 rejection, Applicant argues that “Finally, notably absent from Burnett is any mention of LIDAR data, sensor data of a similar kind, or the tokenization of geometric information stored in such a dataset to form signature keys. The Office's citations to Burnett on page 20 of the Action capture that Burnett's disclosures pertain to raw machine data from IT environments and that the text of this machine data is tokenized, yet these distinctions are not highlighted by the Office. Burnett is similarly silent regarding the comparison of signature keys comprising tokens obtained from a first dataset with signature keys comprising tokens obtained from a second dataset, LIDAR or otherwise, given that Burnett's patterns are user-entered.” Applicant’s argument is moot because Burnett has already been cited as mentioning the raw machine data generated by “sensors” and the machine data “can include…sensor data” (Col. 8 Lines 25-44) and the particular comparison of signature keys/vectors comprising the comparison of tokens in the signature keys/vectors each comprising one or more tokens representing different pieces of data (Col. 170 Lines 46-53 and Col. 171 Line 61 – Col.
172 Line 8); the type of sensor data is merely a choice of data source that is also already taught by both Vianello (at least in Para. [0062]) and Moustafa (at least in Para. [0184]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Portail et al. (U.S. Patent No. 12,100,159) teaches determining a timeseries of measurements of a geographic region; determining a set of object representations from the timeseries of measurements; and determining a timeseries of object versions based on relationships between the object representations. The reference further teaches “a hash of the object's characteristics (e.g., geometric characteristics, visual characteristics, etc.), generated from the object representations associated with the object version” and “determining whether the object representations represent the same object version can include comparing feature vectors (e.g., extracted using the appearance change-agnostic model) and/or attribute vectors. In this variant, the object representations can be considered to represent the same object version when the vectors match (e.g., exactly or above a threshold similarity). However, the feature vectors can be otherwise compared.”

Kristensen et al. (U.S. Pre-Grant Publication No. 2021/0286923) teaches a sensor model may be learned to predict virtual sensor data for a given scene configuration. For example, a sensor model may include a deep neural network that supports generative learning—such as a generative adversarial network (GAN). The sensor model may accept an encoded representation of a scene configuration as an input using any number of data structures and/or channels (e.g., concatenated vectors, matrices, tensors, images, etc.), and may output virtual sensor data. Real-world data and/or virtual data may be collected and used to derive training data, which may be used to train the sensor model to predict virtual sensor data for a given scene configuration.
As such, one or more sensor models may be used as virtual sensors in any of a variety of applications, such as in a simulated environment to test features and/or functionality of one or more autonomous or semi-autonomous driving software stacks. The reference further teaches “real-world data and/or virtual data may be collected from RADAR and/or LIDAR sensor(s) and used to encode the existence of reflections and values for the reflections such as bearing, azimuth, elevation, range (e.g., time of beam flight), intensity, Doppler velocity, RADAR cross section (RCS), reflectivity, signal-to-noise ratio, some combination thereof, and/or the like” (Para. [0028]).

Xie et al. (U.S. Pre-Grant Publication No. 2023/0384448) teaches a LiDAR system that improves target object distance detection by mitigating internal reflections and glare artifacts. The system includes a laser source, an optical module, a pixel circuit, a time-to-digital converter (TDC), a memory device, and a processor module. The processor module uses two data arrays to determine two vectors and utilizes a similarity function, such as the Pearson correlation coefficient, to identify and remove artifact peaks from the data. These artifact peaks may be associated with glare, semi-transparent objects, or internal reflections within lens elements.

Dal Mutto et al. (U.S. Pre-Grant Publication No.
2019/0096135) teaches a system for visual inspection includes: a scanning system configured to capture images of an object and to compute a three-dimensional (3-D) model of the object based on the captured images; an inspection system configured to: compute a descriptor of the object based on the 3-D model of the object; retrieve metadata corresponding to the object based on the descriptor; and compute a plurality of inspection results based on the retrieved metadata and the 3-D model of the object; and a display device system including: a display; a processor; and a memory storing instructions that, when executed by the processor, cause the processor to: generate overlay data from the inspection results; and show the overlay data on the display, the overlay data being aligned with a view of the object through the display.

Kirchner (U.S. Pre-Grant Publication No. 2020/0050901) teaches identifying a defined object (e.g., hazard): a sensor detecting and defining a digital representation of an object; a processor (connected to the sensor) which executes two techniques to identify a signature of the defined object; a memory (connected to the processor) storing reference data relating to two signatures derived, respectively, by the two techniques; responsive to the processor receiving the digital representation from the sensor, the processor executes the two techniques, each technique assessing the digital representation to identify any signature candidate defined by the object, derive feature data from each identified signature candidate, compare the feature data to the reference data, and derive a likelihood value of the signature candidate corresponding with the respective signature; combining likelihood values to derive a composite likelihood value and thus determine whether the object in the digital representation is the defined object. The reference further teaches “a comparator, typically being a classifier or a finding algorithm, to compare each derived feature
vector (a) with the feature variance distribution defined by the reference data” (Para. [0106]). Baxley et al. (U.S. Pre-Grant Publication No. 2016/0127931) teaches determining a physical position of a radio transmitter. A physical model for electromagnetic signal propagation within the electromagnetic environment may be established. Radio frequency signal power levels associated with the radio transmitter may be received from one or more radio frequency sensors. Parameters associated with the physical model may be estimated for one or more test locations within the electromagnetic environment. An error metric between the received radio frequency signal power levels and the physical model may be computed for the one or more test locations. Bounds on the parameters associated with the physical model may be established to prune away physically impossible solutions. The parameters associated with the physical model may be optimized across the one or more test locations to establish a preferred location estimate for the radio transmitter.The reference further teaches “The device classification module 370 can further perform anomaly analysis, which compares the features associated with each data throughput feature vector into an aggregate metric.” (Para. [0128]). Kilaru (U.S. Pre-Grant Publication No. 2024/0194068) teaches one or more of determining a vehicle in a group of vehicles that has an unobstructed view of an area ahead of the group, collecting data from sensors on the vehicle, sending the data to one or more other vehicles in the group of vehicles, and providing the unobstructed view of the area ahead of the group on a display of the one or more other vehicles.The reference further teaches “individually comparing each feature from the new image to this database to find a candidate that may match features based on a Euclidean distance of their feature vectors.” (Para. [0074]). Bergen (U.S. Pre-Grant Publication No. 
2021/0073571) teaches point clouds of objects are compared and matched using logical arrays based on the point clouds. The point clouds are azimuth aligned and translation aligned. The point clouds are converted into logical arrays for ease of processing. Then the logical arrays are compared (e.g. using the AND function and counting matches between the two logical arrays). The comparison is done at various quantization levels to determine which quantization level is likely to give the best object comparison result. Then the object comparison is made. More than two objects may be compared and the best match found.The reference further teaches “Azimuth aligning may be performed by finding a center of gravity of each point cloud, generating weighted sums of each point cloud by determining the number of points along various angular directions from the center of gravity, generating a characteristic signature of each point cloud based upon its weighted sums, cross correlating the two characteristic signatures and choosing the angular alignment between the point clouds” (Para. [0015]).a Singh et al. (U.S. Pre-Grant Publication No. 2024/0127596) teaches a perception system may be used to generate bounding boxes for objects in a vehicle scene. The perception system may receive images and feature maps corresponding to the received images. The perception system may generate scene dependent radar-based object queries. The perception system may use the generated scene dependent radar-based object queries and scene independent object queries to generate one or more bounding boxes for objects in the vehicle scene.The reference further teaches “the radar detection stage 507 may determine positions of objects based on certain ranges and azimuth angles of radar points of the returned radio waves that satisfy the signature condition” (Para. [0117]). Duan et al. (U.S. Pre-Grant Publication No. 2024/0196124) teaches a microphone array and a processor. 
The microphone array detects first sound signal and the second sound signal. Each of the first and second sound signals has a particular frequency band. Each of the first and second sound signals is originated from a particular sound source. The processor receives the first and second sound signals. The processor amplifies the first sound signals, each with a different amplification order. The processor disregards the second sound signal, where the second sound signal includes interference noise signals. The processor determines that the first sound signals indicate that a vehicle is within a threshold distance from an autonomous vehicle and traveling in a direction toward the autonomous vehicle. In response, the processor instructs the autonomous vehicle to perform a minimal risk condition operation. The minimal risk condition operation includes pulling over or stopping the autonomous vehicle.The reference further teaches “The control device 550 may compare the feature vector with each vector associated with the sound data samples included in the training dataset. The control device 550 may determine a Euclidean distance between the determined feature vector and each feature vector associated with the sound sample data included in the training dataset” (Para. [0109]). Banerjee et al. (U.S. Pre-Grant Publication No. 2019/0082103) teaches receiving a plurality of images from a first camera with a first field of view and a second plurality of images from a second camera with a second field of view. An overlapping region exists between the first field of view and the second field of view. The method also includes predicting a disparity of a moving object present in a first image of the first plurality of images. The moving object is not present in a corresponding second image of the second plurality of images. The method further includes determining warp vectors based on the predicted disparity. 
The method additionally includes combining an image from the first plurality of images with an image from the second plurality of images based on the determined warp vectors.The reference further teaches “Detected features may be compared between the rectangular images to determine vectors (e.g., distances) between the features. The vectors may be indicate and/or may be utilized to determine 1110 the disparity (and/or depth)” (Para. [0135]). Ishigami et al. (U.S. Pre-Grant Publication No. 2024/0219582) teaches “the reference further teaches in Step S1026, the angular velocity sensor correcting means 146 corrects a 0-point (also referred to as a bias) of the angular velocity sensor 144 based on a difference between an azimuth obtained by summing momentary yaw angles and a subject vehicle azimuth corrected by the hybrid positioning means 148, using a subject vehicle azimuth at a given time during the traveling of the subject vehicle as an initial value. The angular velocity sensor correcting means 146 can perform this correction using the correction method described in, for example, Japanese Patent No. 3321096 and Japanese Patent No. 3727489. After Step S1026, the processes return to Step S1001 in FIG. 11.” (Para. [0198]). And the use of LiDAR (Para. [0254]). Kakeda et al. (U.S. Pre-Grant Publication No. 2022/0161794) teaches a vehicle control device includes: a recognition part recognizing a first road marking line on a first side of a lane traveled by a vehicle; an obtaining part obtaining information on a second road marking line on the first side of the lane from map information; and a support part that executes a support processing for preventing the vehicle from deviating from a road marking line when a probability that the vehicle deviates from the road marking line is greater than or equal to a predetermined degree. 
When a degree of matching between a first position of the first road marking line and a second position of the second road marking line is less than or equal to a first threshold value, the support part suppresses execution of the support processing for preventing the vehicle from deviating from a road marking line on the first side.The reference further teaches “in the equation (1), the support part 146 divides the total of the azimuth differences (Σw[i]Δθ[i]) for each corresponding sampling point by the number of corresponding sampling points (Σw[i]) to calculate the average difference (Δθave). The azimuth difference for each corresponding sampling point may be weighted based on the position of the sampling point. For example, the azimuth differences of the sampling points near the vehicle M may be given a heavier weight than the azimuth differences of the sampling points far from the vehicle M.” (Para. [0096]). Kalita et al. (U.S. Pre-Grant Publication No. 2023/0108406) teaches a method for detecting, localizing, reporting, and displaying potholes on a road, in a system comprising a sensing device mounted on a vehicle, a cartography display, and a control unit, the sensing device comprising at least a radar device, the method comprising scanning with the sensing device an area of interest in front of and ahead the vehicle, the area of interest including at least a surface of a road traveled by the vehicle, the sensing device outputting a data flow; identifying first candidate potholes formed on the road surface; further processing the data flow to find out first confirmed potholes among the first candidates potholes; allocating a geolocation to each of the first confirmed potholes; and displaying, on the cartography display, first potholes with their localization superimposed on the map.The reference further teaches “the angular scan domain is symmetrical with regard to the longitudinal axis (left/right symmetry). 
There may be a proportional link 660 between azimuth and elevation, wherein the total azimuthal range is a function of a elevation (negative), as illustrated at FIG. 6.” (Para. [0104]). League et al. (U.S. Patent No. 5,587,929) teaches tracking objects first receives returns from objects in the field of view of a detector. The sensor generates a current frame of datapoints, where an object in the field of view can be represented by multiple datapoints. The datapoints are converted into global coordinates and mapped into a next frame of datapoints and generated at a next sample time to cream a new current frame of datapoints. This new current frame of datapoints is processed to form a list of objects with location and power information that includes information from one or more previous frames. This mapping and processing to form the list of objects allows the system to detect weak signal targets in a ground cluttered environment and minimizes the occurrence of false alarms.The reference further teaches “In one embodiment of blob coloring the label (described as a 1 or 2 in the above description) that is used for an object is used as the actual software pointer (computer memory position of the information) to a structure of information about the blob. This information can include the near right range and azimuth of the blob, the far left range and azimuth of the blob, the centroid, the sum of the power of the datapoints in the blob, the number of points in the blob, and the maximum power of a datapoint in the blob. By using this technique, the list of blobs and their relevant information is completed. This list of blob pointers is searched to determine when to slow down or stop the vehicle.” (Col. 12 Lines 28-39). Unnikrishnan et al. (U.S. Pre-Grant Publication No. 2020/0218913) teaches techniques for determining a motion state of a target object. 
In an aspect, an on-board computer of an ego vehicle detects the target object in one or more images, determines one or more first attributes of the target object based on measurements of the one or more images, determines one or more second attributes of the target object based on measurements of a map of a roadway on which the target object is travelling, and determines the motion state of the target object based on the one or more first attributes and the one or more second attributes of the target object.The reference further teaches “this cost is computed as a weighted sum of the difference in azimuth, and difference in range of the camera and radar detections. The height of the camera 2D bounding box detection is used as a proxy for its range.” (Para. [0087]). Foreign Publication ID GB 2612878 A published May 17, 2023 teaches the steps of obtaining at least two light detection and ranging (LiDAR) point clouds 602, processing the at least two LiDAR point clouds using at least one classifier network 604, obtaining at least one output dataset from the at least one classifier network 606, determining that the at least two LiDAR point clouds are misaligned based on the at least one output dataset 608, and performing a first action based on determining that the at least two LiDAR point clouds are misaligned 610. The at least one classifier network may comprise at least one of a pillar-based network (see figure 5b) or a kernel point convolution-based network (see figure 5c). The first action may comprise labelling the at least two LiDAR point clouds as misaligned, and/or updating a locality of a map based on labelling the at least two LiDAR point clouds as misaligned. Silberman et al. (U.S. Patent No. 
11,556,638) teaches generating event-specific handling instructions for accelerating a threat mitigation of a cybersecurity event includes identifying a cybersecurity event; generating a cybersecurity event digest based on the cybersecurity event, computing a cybersecurity hashing-based signature of the cybersecurity event based on the cybersecurity event digest; searching, based on the distinct cybersecurity hashing-based signature of the cybersecurity event, an n-dimensional space comprising a plurality of historical cybersecurity event hashing-based signatures; returning one or more historical cybersecurity events or historical cybersecurity alerts homogeneous to the cybersecurity event based on the search; deriving one or more cybersecurity event-specific handling actions for the cybersecurity event based on identifying a threat handling action corresponding to each of the one or more historical cybersecurity events or historical cybersecurity alerts homogeneous to the cybersecurity event; and executing one or more cybersecurity threat mitigation actions to resolve or mitigate the cybersecurity event.The reference further teaches “assessing the hashing-based signature of the target cybersecurity event against each of a subset of the plurality of historical cybersecurity event hashing-based signatures having a same or similar number of tokens.” (Claim 3). Cheng et al. (U.S. Pre-Grant Publication No. 2014/0280082) teaches “The received 402 query may then be tokenized 404, which may include generating a vector or array of tokens in the query. The method by which tokens are identified may be as described above with respect to the tokenization module 304. The product records 114 of the product database 112 may then be analyzed to identify 406 a plurality of product records 114 at least one of the tokens of the query. 
In some embodiments, only those product records 114 matching all or a threshold amount or percentage of the tokens are identified 406 as matching.” (Para. [0038]). Heisele et al. (U.S. Pre-Grant publication No. 2007/0179918) teaches object recognition techniques are disclosed that provide both accuracy and speed. One embodiment of the present invention is an identification system. The system is capable of locating objects in images by searching for local features of an object. The system can operate in real-time. The system is trained from a set of images of an object or objects. The system computes interest points in the training images, and then extracts local image features (tokens) around these interest points. The set of tokens from the training images is then used to build a hierarchical model structure. During identification/detection, the system, computes interest points from incoming target images. The system matches tokens around these interest points with the tokens in the hierarchical model. Each successfully matched image token votes for an object hypothesis at a certain scale, location, and orientation in the target image. Object hypotheses that receive insufficient votes are rejected.The reference further teaches “each successfully matched image token votes for an object hypothesis at a certain scale, location, and orientation in the target image. The hypothesis verification module 420 is programmed or otherwise configured to determine if a token match threshold is satisfied. In one particular embodiment, a valid hypothesis is obtained if a minimum number of tokens matched and voted for the same object hypothesis” (Para. [0058]). Meyer et al. (U.S. Pre-Grant Publication No. 2025/0085416) teaches computing a point cloud based upon data output by several radar sensors in a distributed radar system. The radar sensors generate tensors based upon echo signals detected by the radar sensors. 
Values are extracted from the tensors and a sequence of tokens is created, where the sequence of tokens includes the values extracted from the tensors. The sequence of tokens is provided as input to a transformer model, which outputs a point cloud based upon the sequence of tokens.The reference further teaches “The generated tensors may also include other values, for example, such as values for range-velocity bins, values for range-azimuth bins, values for range-cross-track angle bins, values for Doppler-velocity bins, etc. “ (Para. [0020]), “The radar system 118 and/or the radar sensors 120 may also be configured to compute measurements of other variables based upon the detected echo signals reflected from the environment, such as velocity, angular, and/or azimuth data.” (Para. [0029]), and “At 212, sequences of tokens are generated based upon the radar data contained in the tensor bins. As discussed above, the sequences of tokens may be indicative of values contained in each tensor bin of each radar tensor.” (Para. [0062]). THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
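Several of the cited references compare feature vectors with standard similarity measures: Xie et al. apply a Pearson correlation coefficient to flag artifact peaks, and Duan et al. and Kilaru compute Euclidean distances between feature vectors. As a minimal illustrative sketch of those two measures (not code from any cited reference, which disclose only the measures themselves):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length,
    non-constant vectors (no zero-variance guard, for brevity)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    std_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    std_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (std_x * std_y)

def euclidean(x, y):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

# Values near +1 indicate strong positive correlation;
# smaller distances indicate more similar feature vectors.
similarity = pearson([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
distance = euclidean([0.0, 0.0], [3.0, 4.0])  # 5.0
```

Either measure can serve as the “similarity function” role these references describe; which one is used, and against what reference vectors, is specific to each disclosure.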
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT F MAY whose telephone number is (571) 272-3195. The examiner can normally be reached Monday-Friday, 9:30am to 6pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney, can be reached at 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BORIS GORNEY/
Supervisory Patent Examiner, Art Unit 2154

/ROBERT F MAY/
Examiner, Art Unit 2154
3/11/2026
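The independent claims recite comparing signature keys in two data structures to determine features present in one vector dataset and not the other. Assuming, purely for illustration, that each dataset has been reduced to a hash map keyed by feature signature (the application's actual data structures may differ), the recited comparison amounts to a two-way key difference:

```python
def diff_signature_keys(first: dict, second: dict) -> tuple[dict, dict]:
    """Compare signature keys of two data structures.

    Returns (features only in the first vector dataset,
             features only in the second vector dataset).
    """
    # dict.keys() views support set operations, so the two
    # one-sided differences fall out directly.
    only_in_first = {k: first[k] for k in first.keys() - second.keys()}
    only_in_second = {k: second[k] for k in second.keys() - first.keys()}
    return only_in_first, only_in_second

# Hypothetical signature keys and feature labels, for illustration only.
a = {"sig_ab12": "lane_marking", "sig_cd34": "pothole"}
b = {"sig_cd34": "pothole", "sig_ef56": "sign"}
left, right = diff_signature_keys(a, b)
# left  -> {"sig_ab12": "lane_marking"}
# right -> {"sig_ef56": "sign"}
```

Note this sketch performs the comparison as a positively recited step; the examiner's claim-interpretation remarks fault the claims for framing it as intent (“to determine”) rather than an active determining limitation.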

Prosecution Timeline

Oct 26, 2023
Application Filed
Sep 30, 2024
Non-Final Rejection — §101, §103
Dec 17, 2024
Examiner Interview Summary
Dec 17, 2024
Applicant Interview (Telephonic)
Jan 02, 2025
Response Filed
Apr 13, 2025
Final Rejection — §101, §103
Jun 17, 2025
Examiner Interview Summary
Jun 17, 2025
Applicant Interview (Telephonic)
Jul 18, 2025
Response after Non-Final Action
Aug 18, 2025
Request for Continued Examination
Aug 21, 2025
Response after Non-Final Action
Sep 23, 2025
Non-Final Rejection — §101, §103
Dec 19, 2025
Applicant Interview (Telephonic)
Dec 19, 2025
Examiner Interview Summary
Dec 29, 2025
Response Filed
Mar 12, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586145
METHOD AND APPARATUS FOR EDITING VIDEO IN ELECTRONIC DEVICE
2y 5m to grant Granted Mar 24, 2026
Patent 12468740
CATEGORY RECOMMENDATION WITH IMPLICIT ITEM FEEDBACK
2y 5m to grant Granted Nov 11, 2025
Patent 12367197
Pipelining a binary search algorithm of a sorted table
2y 5m to grant Granted Jul 22, 2025
Patent 12360955
Data Compression and Decompression Facilitated By Machine Learning
2y 5m to grant Granted Jul 15, 2025
Patent 12347550
IMAGING DISCOVERY UTILITY FOR AUGMENTING CLINICAL IMAGE MANAGEMENT
2y 5m to grant Granted Jul 01, 2025
Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+29.7%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 286 resolved cases by this examiner. Grant probability derived from career allow rate.
