Prosecution Insights
Last updated: April 19, 2026
Application No. 18/717,934

SYSTEMS AND TECHNIQUES FOR DYNAMIC MOBILE MANAGEMENT USING COMPUTER VISION AND NAVIGATION SENSORS

Final Rejection (§103, §112)
Filed: Jun 07, 2024
Examiner: MOORE, DUANE NEIL
Art Unit: 3628
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Pied Parker Inc.
OA Round: 2 (Final)
Grant Probability: 26% (At Risk)
Projected OA Rounds: 3-4
Estimated Time to Grant: 2y 11m
Grant Probability With Interview: 42%

Examiner Intelligence

Career Allow Rate: 26% (25 granted / 96 resolved; -26.0% vs Tech Center average)
Interview Lift: +15.6% allowance among resolved cases with an interview
Avg Prosecution: 2y 11m; 23 applications currently pending
Career History: 119 total applications across all art units

Statute-Specific Performance

§101: 38.7% allowance (-1.3% vs TC avg)
§103: 34.8% allowance (-5.2% vs TC avg)
§102: 9.3% allowance (-30.7% vs TC avg)
§112: 16.7% allowance (-23.3% vs TC avg)
Tech Center averages are estimates; based on career data from 96 resolved cases.
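The headline figures in this panel follow directly from the counts it reports; a quick sketch of the arithmetic (the Tech Center average is back-derived from the stated delta rather than given directly, so it is an inference, not report data):

```python
# Reproduce the examiner analytics above from the raw counts.
granted, resolved = 25, 96            # career: 25 granted / 96 resolved
allow_rate = granted / resolved       # ~0.260 -> the 26% Career Allow Rate

# The report shows -26.0 points vs. the Tech Center average,
# which implies a TC-average allowance near 52% (inferred).
delta_vs_tc = -0.260
implied_tc_avg = allow_rate - delta_vs_tc

# Interview lift: allowance runs 15.6 points higher among
# resolved cases with an examiner interview (~42% on the card).
interview_lift = 0.156
rate_with_interview = allow_rate + interview_lift
```

This is why the report pairs the raw "At Risk" probability with a separate with-interview figure: the lift is additive on top of the base allow rate.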

Office Action

Grounds of rejection: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The amendment filed 11/11/2025 has been entered. Claims 1-20 remain pending in the application.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 6, 13, and 20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Claims 6, 13, and 20 each recite “transmit[ting], using the communication system, via LoRa.” It is not clear what the communication system is transmitting. Furthermore, neither the claims nor the specification provide a definition for “LoRa.” For purposes of examination, the Examiner is interpreting claims 6, 13, and 20 as requiring a process of transmitting an item using the communication system.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

I. Claims 1-3, 5-6, 8-10, 12-13, 15-17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lablans (US 2017/0323458) in view of D'Angio (US 2021/0005332) and Rosas-Maxemin (US 2020/0211071).

Regarding Claim 1, Lablans teaches a system for identifying and locating one or more shipping containers or one or more open parking spots in a container yard, the system comprising: at least one memory; and at least one processor coupled to the at least one memory, the at least one processor configured to ([0277] computing device including a) a programmed processor with memory configured to perform steps of the present invention): calculate a location of a locator device ([0112] For instance a user of a camera can walk around an environment and identify and store coordinates of locations on the camera.
This allows the user of the camera to recall the location from the memory, determine a current location of the camera and have the processor based on the recalled location and current location calculate the required pose of the camera to be pointed at the recalled location from the current location; [0205] A camera may also include a loudspeaker and a microphone and a distance ranging device [locator device]); and calculate a location of the one or more shipping containers or one or more open parking spots based on the location of locator device ([0071] allows the processor to calculate a relationship between a location of an object in the image plane relative to its actual size and distance from the camera; [0267] a smartphone or computing device determines, based on a distance between itself and an object and a pose that places the object in a field of view of a camera of the device, a “location volume” with boundaries, the boundaries being defined by positional coordinates, which may be GPS coordinates or indoor coordinates; Claim 1 Apparatus to determine position coordinates including an altitude of an object, comprising: a mobile, portable and wireless computing device including a camera and positional sensors to determine positional data of the camera, including an altitude and a directional pose and at least one range-finding device [locator device] to determine a distance from the mobile, portable and wireless computing device to the object, the range-finding device being selected from the group consisting of laser based range finder, image based range finder, radar based range-finder and sound based range finder). 
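Lablans's core teaching is deriving an object's coordinates from the locator's own position, a pose (bearing) that places the object in view, and a range-finder distance. A minimal flat-earth approximation of that kind of calculation, adequate at container-yard scale (this helper is an illustration, not the computation actually disclosed in Lablans):

```python
import math

# Estimate an object's lat/lon from the locator device's position, a
# compass bearing to the object, and a range-finder distance, using a
# local flat-earth approximation (fine over a few hundred meters).
# Illustrative only; not Lablans's disclosed method.

EARTH_RADIUS_M = 6_371_000.0

def locate_object(lat, lon, bearing_deg, range_m):
    north = range_m * math.cos(math.radians(bearing_deg))
    east = range_m * math.sin(math.radians(bearing_deg))
    dlat = math.degrees(north / EARTH_RADIUS_M)
    dlon = math.degrees(east / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

# A container measured 100 m due north of the locator device:
obj = locate_object(37.8000, -122.4000, 0.0, 100.0)
```

At these distances the flat-earth error is negligible; over kilometers one would switch to a proper geodesic formula.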
Although Lablans teaches calculating a first location of a locator device, Lablans does not explicitly teach, however D'Angio teaches that the location of a locator device can be calculated using one or more position, navigation, and timing (PNT) sensors comprising at least an inertial measurement unit (IMU) and a Global Navigation Satellite System (GNSS) receiver ([0002] sensor data may include, but is not limited to Position, Navigation, and Timing (PNT) data from PNT sensors; [0044] the platform 150 may have a GPS GNSS sensor, a Galileo GNSS sensor, and an IMU sensor. The resilience engine 128 may detect a jamming threat on the GPS GNSS sensor, reducing its calculated sensor trust metric down to 10% and effectively ignoring sensor data from the GPS GNSS sensor altogether. The fusion module 138 works to fuse location data from the GPS GNSS sensor, Galileo GNSS sensor, and the IMU sensor. When the GPS GNSS sensor effectively goes offline from jamming, a fused location can still be calculated from the Galileo GNSS and IMU sensors, providing location sensing that is robust to threats).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to substitute the sensors in Lablans with the sensors in D'Angio. See KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) (simple substitution of one known element for another to obtain predictable results). Such a simple substitution would yield the predictable result of a process of calculating, using PNT sensor(s) comprising at least an inertial measurement unit (IMU) and a Global Navigation Satellite System (GNSS) receiver, a location of a locator device.
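The D'Angio passage describes trust-weighted fusion of GNSS and IMU position estimates, where a jammed sensor's trust metric collapses so the fused location falls back to the remaining sensors. A minimal sketch of that idea (the trust values, coordinates, and helper name below are invented for illustration, not taken from D'Angio):

```python
# Illustrative trust-weighted position fusion in the style of
# D'Angio [0044]: each PNT sensor reports a position estimate and a
# trust metric in [0, 1]; a jammed sensor's trust is driven toward
# zero, so healthy sensors dominate the fused location.

def fuse_position(estimates):
    """estimates: list of ((lat, lon), trust) pairs."""
    total = sum(trust for _, trust in estimates)
    if total == 0:
        raise ValueError("no trusted sensor data")
    lat = sum(pos[0] * trust for pos, trust in estimates) / total
    lon = sum(pos[1] * trust for pos, trust in estimates) / total
    return (lat, lon)

# GPS GNSS jammed: trust cut to 0.10, so Galileo + IMU dominate.
sensors = [
    ((37.7750, -122.4194), 0.10),  # GPS GNSS (jamming detected)
    ((37.7790, -122.4180), 0.90),  # Galileo GNSS
    ((37.7788, -122.4182), 0.80),  # IMU dead-reckoning
]
fused = fuse_position(sensors)
```

The fused fix lands near the two trusted sensors rather than the jammed one, which is the robustness property the reference relies on.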
Lablans does not explicitly teach, however Rosas-Maxemin teaches scan, using the camera and remote sensors including at least one Detection and Ranging (LiDAR) sensor, environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a machine learning (ML) model and a computer vision algorithm; determine, using the ML model, the environmental features associated with the lot; calculate a location of the one or more shipping containers or one or more open parking spots based on the determination of the environmental features associated with the lot ([0016] receiving image data captured using one or more cameras, the image data including portions of one or more parking spaces; determining one or more parking space parameters associated with respective one or more parking spaces based on machine learning identification of aspects of the image data, the one or more parking space parameters including listing location availability and location data; determining one or more parking spaces correspond with one or more parking spaces stored in a parking space database based at least in part on comparing the respective one or more parking space parameters; receiving a search request from a first wireless device; selecting a first parking space for a first vehicle based on the one or more parking space parameters and matching preferences included in the search request; and transmitting instructions to the first wireless device to direct the first vehicle autonomously to the first parking space based on the selection, the instructions including a confirmation identifier, which, when transmitted by the first wireless device via short range wireless communication, causes a secured area to become unlocked; [0035] Once it is determined that a vehicle is looking for parking, image data gathered from multiple sources may be used to identify available parking spaces. 
The image data may be captured from cameras on other vehicles, cameras in a parking facility or along a street, and other parking sensors. The image data may be processed using machine learning algorithms, which may be trained to detect particular parking features, such as the presence or absence of a vehicle. Based on a recognition of the image and location information, the particular parking space in the image may be determined, for instance, by comparing information about the parking space with parking spaces in databases of parking spaces; [0064] at the entrance/exit to the parking lot there are sensors, such as cameras, that may identify a vehicle and associate the vehicle with a payment profile; [0074] parking space availability may be determined by using data from one or more sources, such as parking sensors in the lot such as wireless communication device 302, camera 310, and cameras on vehicles that have entered geographic area 304, such as vehicles 350-355; [0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. 
Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include in the parking location method of Lablans the process of scanning, using the camera and remote sensors including at least one Detection and Ranging (LiDAR) sensor, environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a machine learning (ML) model and a computer vision algorithm; determining, using the ML model, the environmental features associated with the lot; and calculating a location of the one or more shipping containers or one or more open parking spots based on the determination of the environmental features associated with the lot as taught by Rosas-Maxemin since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination is predictable. Such a combination would yield the predictable result of a parking location method where a camera and remote sensors including at least one LiDAR sensor, scans environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a ML model and a computer vision algorithm; the environmental features associated with the lot are determined using the ML model; and a location of the one or more shipping containers or one or more open parking spots is calculated based on the determination of the environmental features associated with the lot. 
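Rosas-Maxemin's paragraph [0081] names several lightweight classifiers, including a Perceptron, for deciding whether a listing location is occupied. As a toy illustration of that class of model (the two image-derived features, weights, and training data below are synthetic, invented for this sketch, and not from the reference):

```python
# Toy perceptron deciding occupied vs. unoccupied parking spots from
# two invented image-derived features (e.g., fraction of the space's
# pixels covered, mean edge density). Mirrors the Perceptron
# classifier named in Rosas-Maxemin [0081]; all data is synthetic.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((f1, f2), label), label 1 = occupied."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (f1, f2), label in samples:
            pred = 1 if w[0] * f1 + w[1] * f2 + b > 0 else 0
            err = label - pred  # standard perceptron update rule
            w[0] += lr * err * f1
            w[1] += lr * err * f2
            b += lr * err
    return w, b

def predict(w, b, feats):
    return 1 if w[0] * feats[0] + w[1] * feats[1] + b > 0 else 0

training = [
    ((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.7, 0.9), 1),  # occupied
    ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.1, 0.1), 0),  # empty
]
w, b = train_perceptron(training)
```

In practice the reference's MobileNet-style detectors would supply far richer features, but the decision step has this same shape: a learned boundary between occupied and unoccupied feature vectors.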
Lablans does not explicitly teach, however Rosas-Maxemin teaches generate a mapping of the lot based on the calculated location of the one or more shipping containers or one or more open parking spots ([0072] the virtual boundary data is enhanced with navigational mappings, such as the location of the entrance and exit on the boundary, the location of the parking spaces in relation to the boundary, the indoor routing within the geographic area, such as the indicated direction vehicles are meant to travel or detours, and/or the like. [0117] During a process 710, initial parking lot conditions are determined and/or inputted based on known listing location parameters. In some examples, the parking lot is empty, and all that is initialized is the area of the parking lot. In some examples, there are one or more vehicles parked in the parking lot. In some examples, specific contours of a parking lot, e.g., an indoor garage, are mapped out using external sensors of vehicles and/or cameras within a parking lot. In some examples, sensors may includes sensors described above with respect to FIG. 3. In some aspects, a vehicle enters geographic area, such as geographic area 602, and vehicles 610-618, 620-625, 626-638, 640-649, 650-661, and 662-665 are parked as illustrated in FIG. 6) (see rejection above for combination rationale).

Lablans does not explicitly teach, however Rosas-Maxemin teaches transmit, using a communication system, the mapping data to a remote location ([0080] The data sources may include image data sources, such as images sent from cameras on one or more vehicles, cameras placed in a vantage point, and/or the like. In some examples, external vehicle sensor data from multiple vehicles may be used to identify unoccupied listing locations in real time) (see rejection above for combination rationale).

Regarding Claim 2, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 1, as discussed above.
Lablans does not explicitly teach, however Rosas-Maxemin teaches wherein the ML models are trained to detect particular features of the surrounding environment ([0016] receiving image data captured using one or more cameras, the image data including portions of one or more parking spaces; determining one or more parking space parameters associated with respective one or more parking spaces based on machine learning identification of aspects of the image data, the one or more parking space parameters including listing location availability and location data; determining one or more parking spaces correspond with one or more parking spaces stored in a parking space database based at least in part on comparing the respective one or more parking space parameters; receiving a search request from a first wireless device; selecting a first parking space for a first vehicle based on the one or more parking space parameters and matching preferences included in the search request; and transmitting instructions to the first wireless device to direct the first vehicle autonomously to the first parking space based on the selection, the instructions including a confirmation identifier, which, when transmitted by the first wireless device via short range wireless communication, causes a secured area to become unlocked; [0035] Once it is determined that a vehicle is looking for parking, image data gathered from multiple sources may be used to identify available parking spaces. The image data may be captured from cameras on other vehicles, cameras in a parking facility or along a street, and other parking sensors. The image data may be processed using machine learning algorithms, which may be trained to detect particular parking features, such as the presence or absence of a vehicle. 
Based on a recognition of the image and location information, the particular parking space in the image may be determined, for instance, by comparing information about the parking space with parking spaces in databases of parking spaces; [0064] at the entrance/exit to the parking lot there are sensors, such as cameras, that may identify a vehicle and associate the vehicle with a payment profile; [0074] parking space availability may be determined by using data from one or more sources, such as parking sensors in the lot such as wireless communication device 302, camera 310, and cameras on vehicles that have entered geographic area 304, such as vehicles 350-355; [0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.) (see claim 1 rejection above for combination rationale). 
Regarding Claim 3, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 1, as discussed above. Lablans does not explicitly teach, however Rosas-Maxemin teaches wherein the ML models include one or more of a deep neural network, a Bayesian network, a MobileNet object detector, and a Stochastic Gradient Descent (SGD) classifier ([0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.) (see claim 1 rejection above for combination rationale).

Regarding Claim 5, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 1, as discussed above.
Lablans further teaches wherein the locator device is attached to a drone ([0275] In one embodiment of the present invention the camera is not earth bound as being attached to an aircraft including an airplane, a helicopter or a drone or a person flying in an aircraft; [0205] A camera may also include a loudspeaker and a microphone and a distance ranging device).

Regarding Claim 6, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 1, as discussed above. Lablans further teaches wherein the at least one processor is further configured to: transmit, using the communication system, via LoRa ([0209] A user may decide to allow 4502 sending the geospatial coordinates determined by 4503 to the outside world. A user may also instruct the device not to transmit that information or only selectively to pre-selected targets or in response to pre-authorized requesters which may be stored in the memory 4504. The device inside 4500 may be controlled wirelessly through another uniquely authorized device like a smart phone or the headframe system as disclosed herein; [0212] The server 4607 in particular is enabled to receive geospatial end identifying data from 4500 and transmit it to any of the other devices; [0214] FIG. 470 illustrates this “where are you?” method provided in accordance with one or more aspects of the present invention. A camera, for instance operated by a user, but it also may be a platform camera, itself or a connected computing device such as a smart phone puts out a request for the location of an object ‘obj’. Such a request may for instance be transmitted over a network to a server. The server matches the name ‘obj’ with an ID of a corresponding device as provided in FIG. 47 and may send a request for geospatial coordinates to that device. In one embodiment the device automatically updates its position to the server. The server checks an authorization to provide that information to the requesting party).
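The §112(b) issue above turns on what is transmitted "via LoRa." For context, LoRa (Long Range) is a low-power wide-area radio technology whose uplinks carry small payloads (on the order of tens of bytes at common data rates), so mapping data is typically packed into a compact binary record before transmission. A hypothetical packing of one spot record (this byte layout is invented for illustration; nothing in the application or references specifies it):

```python
import struct

# Hypothetical compact uplink record for a LoRa transmission:
# spot id (uint16), occupancy flag (uint8), lat/lon as signed
# microdegrees (int32 each) -> 11 bytes total, comfortably within
# typical LoRa payload budgets. Format invented for illustration.

def pack_spot(spot_id, occupied, lat, lon):
    return struct.pack(
        ">HBii", spot_id, 1 if occupied else 0,
        round(lat * 1_000_000), round(lon * 1_000_000),
    )

def unpack_spot(payload):
    spot_id, flag, lat_u, lon_u = struct.unpack(">HBii", payload)
    return spot_id, bool(flag), lat_u / 1_000_000, lon_u / 1_000_000

# One open spot at an example coordinate:
payload = pack_spot(42, False, 37.7749, -122.4194)
```

Microdegree encoding keeps ~0.1 m resolution while avoiding floats on the wire, a common trick for constrained radio links.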
Regarding Claim 8, Lablans teaches a method comprising: calculating a location of a locator device ([0112] For instance a user of a camera can walk around an environment and identify and store coordinates of locations on the camera. This allows the user of the camera to recall the location from the memory, determine a current location of the camera and have the processor based on the recalled location and current location calculate the required pose of the camera to be pointed at the recalled location from the current location; [0205] A camera may also include a loudspeaker and a microphone and a distance ranging device [locator device]); and calculating a location of the one or more shipping containers or one or more open parking spots based on the location of the locator device ([0071] allows the processor to calculate a relationship between a location of an object in the image plane relative to its actual size and distance from the camera; [0267] a smartphone or computing device determines, based on a distance between itself and an object and a pose that places the object in a field of view of a camera of the device, a “location volume” with boundaries, the boundaries being defined by positional coordinates, which may be GPS coordinates or indoor coordinates; Claim 1 Apparatus to determine position coordinates including an altitude of an object, comprising: a mobile, portable and wireless computing device including a camera and positional sensors to determine positional data of the camera, including an altitude and a directional pose and at least one range-finding device [locator device] to determine a distance from the mobile, portable and wireless computing device to the object, the range-finding device being selected from the group consisting of laser based range finder, image based range finder, radar based range-finder and sound based range finder).
Although Lablans teaches calculating a first location of a locator device, Lablans does not explicitly teach, however D'Angio teaches that the location of a locator device can be calculated using one or more position, navigation, and timing (PNT) sensors comprising at least an inertial measurement unit (IMU) and a Global Navigation Satellite System (GNSS) receiver ([0002] sensor data may include, but is not limited to Position, Navigation, and Timing (PNT) data from PNT sensors; [0044] the platform 150 may have a GPS GNSS sensor, a Galileo GNSS sensor, and an IMU sensor. The resilience engine 128 may detect a jamming threat on the GPS GNSS sensor, reducing its calculated sensor trust metric down to 10% and effectively ignoring sensor data from the GPS GNSS sensor altogether. The fusion module 138 works to fuse location data from the GPS GNSS sensor, Galileo GNSS sensor, and the IMU sensor. When the GPS GNSS sensor effectively goes offline from jamming, a fused location can still be calculated from the Galileo GNSS and IMU sensors, providing location sensing that is robust to threats).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to substitute the sensors in Lablans with the sensors in D'Angio. See KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) (simple substitution of one known element for another to obtain predictable results). Such a simple substitution would yield the predictable result of a process of calculating, using PNT sensor(s) comprising at least an inertial measurement unit (IMU) and a Global Navigation Satellite System (GNSS) receiver, a location of a locator device.
Lablans does not explicitly teach, however Rosas-Maxemin teaches scanning, using a camera and remote sensors including at least one Detection and Ranging (LiDAR) sensor, environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a machine learning (ML) model and a computer vision algorithm; determine, using the ML model, the environmental features associated with the lot; calculate a location of the one or more shipping containers or one or more open parking spots based on the determination of the environmental features associated with the lot ([0016] receiving image data captured using one or more cameras, the image data including portions of one or more parking spaces; determining one or more parking space parameters associated with respective one or more parking spaces based on machine learning identification of aspects of the image data, the one or more parking space parameters including listing location availability and location data; determining one or more parking spaces correspond with one or more parking spaces stored in a parking space database based at least in part on comparing the respective one or more parking space parameters; receiving a search request from a first wireless device; selecting a first parking space for a first vehicle based on the one or more parking space parameters and matching preferences included in the search request; and transmitting instructions to the first wireless device to direct the first vehicle autonomously to the first parking space based on the selection, the instructions including a confirmation identifier, which, when transmitted by the first wireless device via short range wireless communication, causes a secured area to become unlocked; [0035] Once it is determined that a vehicle is looking for parking, image data gathered from multiple sources may be used to identify available parking spaces. 
The image data may be captured from cameras on other vehicles, cameras in a parking facility or along a street, and other parking sensors. The image data may be processed using machine learning algorithms, which may be trained to detect particular parking features, such as the presence or absence of a vehicle. Based on a recognition of the image and location information, the particular parking space in the image may be determined, for instance, by comparing information about the parking space with parking spaces in databases of parking spaces; [0064] at the entrance/exit to the parking lot there are sensors, such as cameras, that may identify a vehicle and associate the vehicle with a payment profile; [0074] parking space availability may be determined by using data from one or more sources, such as parking sensors in the lot such as wireless communication device 302, camera 310, and cameras on vehicles that have entered geographic area 304, such as vehicles 350-355; [0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. 
Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include in the parking location method of Lablans the process of scanning, using the camera and remote sensors including at least one Detection and Ranging (LiDAR) sensor, environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a machine learning (ML) model and a computer vision algorithm; determining, using the ML model, the environmental features associated with the lot; and calculating a location of the one or more shipping containers or one or more open parking spots based on the determination of the environmental features associated with the lot as taught by Rosas-Maxemin since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination is predictable. Such a combination would yield the predictable result of a parking location method where a camera and remote sensors including at least one LiDAR sensor, scans environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a ML model and a computer vision algorithm; the environmental features associated with the lot are determined using the ML model; and a location of the one or more shipping containers or one or more open parking spots is calculated based on the determination of the environmental features associated with the lot. 
Lablans does not explicitly teach, however Rosas-Maxemin teaches generate a mapping of the lot based on the calculated location of the one or more shipping containers or one or more open parking spots ([0072] the virtual boundary data is enhanced with navigational mappings, such as the location of the entrance and exit on the boundary, the location of the parking spaces in relation to the boundary, the indoor routing within the geographic area, such as the indicated direction vehicles are meant to travel or detours, and/or the like. [0117] During a process 710, initial parking lot conditions are determined and/or inputted based on known listing location parameters. In some examples, the parking lot is empty, and all that is initialized is the area of the parking lot. In some examples, there are one or more vehicles parked in the parking lot. In some examples, specific contours of a parking lot, e.g., an indoor garage, are mapped out using external sensors of vehicles and/or cameras within a parking lot. In some examples, sensors may includes sensors described above with respect to FIG. 3. In some aspects, a vehicle enters geographic area, such as geographic area 602, and vehicles 610-618, 620-625, 626-638, 640-649, 650-661, and 662-665 are parked as illustrated in FIG. 6) (see rejection above for combination rationale).

Lablans does not explicitly teach, however Rosas-Maxemin teaches transmit, using a communication system, the mapping data to a remote location ([0080] The data sources may include image data sources, such as images sent from cameras on one or more vehicles, cameras placed in a vantage point, and/or the like. In some examples, external vehicle sensor data from multiple vehicles may be used to identify unoccupied listing locations in real time) (see rejection above for combination rationale).

Regarding Claim 9, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 8, as discussed above.
Lablans does not explicitly teach, however Rosas-Maxemin teaches wherein the ML models are trained to detect particular features of the surrounding environment ([0016] receiving image data captured using one or more cameras, the image data including portions of one or more parking spaces; determining one or more parking space parameters associated with respective one or more parking spaces based on machine learning identification of aspects of the image data, the one or more parking space parameters including listing location availability and location data; determining one or more parking spaces correspond with one or more parking spaces stored in a parking space database based at least in part on comparing the respective one or more parking space parameters; receiving a search request from a first wireless device; selecting a first parking space for a first vehicle based on the one or more parking space parameters and matching preferences included in the search request; and transmitting instructions to the first wireless device to direct the first vehicle autonomously to the first parking space based on the selection, the instructions including a confirmation identifier, which, when transmitted by the first wireless device via short range wireless communication, causes a secured area to become unlocked; [0035] Once it is determined that a vehicle is looking for parking, image data gathered from multiple sources may be used to identify available parking spaces. The image data may be captured from cameras on other vehicles, cameras in a parking facility or along a street, and other parking sensors. The image data may be processed using machine learning algorithms, which may be trained to detect particular parking features, such as the presence or absence of a vehicle. 
Based on a recognition of the image and location information, the particular parking space in the image may be determined, for instance, by comparing information about the parking space with parking spaces in databases of parking spaces; [0064] at the entrance/exit to the parking lot there are sensors, such as cameras, that may identify a vehicle and associate the vehicle with a payment profile; [0074] parking space availability may be determined by using data from one or more sources, such as parking sensors in the lot such as wireless communication device 302, camera 310, and cameras on vehicles that have entered geographic area 304, such as vehicles 350-355; [0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.) (see claim 8 rejection above for combination rationale). 
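For context on the classifier families recited in Rosas-Maxemin's paragraph [0081], the stochastic gradient descent (SGD) classifier named there can be sketched in a few lines of pure Python. The feature choice and training data below are hypothetical and purely illustrative of the technique; they are not part of the record.

```python
import math
import random

def train_sgd_classifier(samples, labels, lr=0.1, epochs=200, seed=0):
    """Logistic classifier trained by stochastic gradient descent:
    one randomly ordered example at a time, the core of an SGD classifier."""
    rng = random.Random(seed)
    w, b = [0.0] * len(samples[0]), 0.0
    order = list(range(len(samples)))
    for _ in range(epochs):
        rng.shuffle(order)                        # "stochastic" example order
        for i in order:
            z = sum(wj * xj for wj, xj in zip(w, samples[i])) + b
            p = 1.0 / (1.0 + math.exp(-z))        # sigmoid probability
            g = p - labels[i]                     # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, samples[i])]
            b -= lr * g
    return w, b

def predict_occupied(w, b, x):
    """1 = listing location occupied, 0 = unoccupied."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

# Hypothetical per-spot image statistics: [edge_density, pavement_brightness].
occupied = [[0.9, 0.2], [0.8, 0.3], [0.85, 0.25]]
empty = [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]
w, b = train_sgd_classifier(occupied + empty, [1, 1, 1, 0, 0, 0])
```

In practice the classifiers listed in [0081] (a MobileNet object detector, the Naive Bayes variants, etc.) would operate on far richer image features, but the per-example update above is the defining trait of the SGD family.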
Regarding Claim 10, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 8, as discussed above. Lablans does not explicitly teach, however Rosas-Maxemin teaches wherein the ML models include one or more of a deep neural network, a Bayesian network, a MobileNet object detector, and a Stochastic Gradient Descent (SGD) classifier ([0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.) (see claim 8 rejection above for combination rationale). Regarding Claim 12, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 8, as discussed above. 
Lablans further teaches wherein the locator device is attached to a drone ([0275] In one embodiment of the present invention the camera is not earth bound as being attached to an aircraft including an airplane, a helicopter or a drone or a person flying in an aircraft; [0205] A camera may also include a loudspeaker and a microphone and a distance ranging device). Regarding Claim 13, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 8, as discussed above. Lablans further teaches further comprising: transmitting, using the communication system, via LoRa ([0209] A user may decide to allow 4502 sending the geospatial coordinates determined by 4503 to the outside world. A user may also instruct the device not to transmit that information or only selectively to pre-selected targets or in response to pre-authorized requesters which may be stored in the memory 4504. The device inside 4500 may be controlled wirelessly through another uniquely authorized device like a smart phone or the headframe system as disclosed herein; [0212] The server 4607 in particular is enabled to receive geospatial and identifying data from 4500 and transmit it to any of the other devices; [0214] FIG. 47 illustrates this “where are you?” method provided in accordance with one or more aspects of the present invention. A camera, for instance operated by a user, but it also may be a platform camera, itself or a connected computing device such as a smart phone puts out a request for the location of an object ‘obj’. Such a request may for instance be transmitted over a network to a server. The server matches the name ‘obj’ with an ID of a corresponding device as provided in FIG. 47 and may send a request for geospatial coordinates to that device. In one embodiment the device automatically updates its position to the server. The server checks an authorization to provide that information to the requesting party). 
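As background on the LoRa limitation of claims 6, 13, and 20: LoRa is a long-range, low-power radio protocol whose uplinks carry small payloads (roughly tens to a couple hundred bytes depending on region and data rate), so a geospatial report of the kind Lablans describes would typically be packed into a fixed binary layout before transmission. The 10-byte layout below is a hypothetical illustration, not a format from the record.

```python
import struct

def pack_location_report(device_id, lat, lon):
    """Pack a device id plus coordinates into 10 bytes, small enough for a
    single LoRa uplink: >H = 2-byte big-endian id, ii = two 4-byte ints.
    Scaling degrees by 1e7 keeps ~centimeter precision in integer fields."""
    return struct.pack(">Hii", device_id, round(lat * 1e7), round(lon * 1e7))

def unpack_location_report(payload):
    """Invert pack_location_report on the receiving side."""
    device_id, lat_e7, lon_e7 = struct.unpack(">Hii", payload)
    return device_id, lat_e7 / 1e7, lon_e7 / 1e7

# Hypothetical locator device 42 reporting a position:
payload = pack_location_report(42, 37.7749, -122.4194)
```

The fixed-width scaled-integer encoding is a common design choice for constrained radio links, since floating-point or text encodings waste payload bytes.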
Regarding Claim 15, Lablans teaches a non-transitory computer-readable storage medium comprising at least one instruction for causing a computer or processor to ([0277] computing device including a) a programmed processor with memory configured to perform steps of the present invention): calculate a location of a locator device ([0112] For instance a user of a camera can walk around an environment and identify and store coordinates of locations on the camera. This allows the user of the camera to recall the location from the memory, determine a current location of the camera and have the processor based on the recalled location and current location calculate the required pose of the camera to be pointed at the recalled location from the current location; [0205] A camera may also include a loudspeaker and a microphone and a distance ranging device [locator device]); and Although Lablans teaches calculating a first location of a locator device, Lablans does not explicitly teach, however D’Angio teaches that the location of a locator device can be calculated using one or more position, navigation, and timing (PNT) sensors comprising at least an inertial measurement unit (IMU) and a Global Navigation Satellite System (GNSS) receiver ([0002] sensor data may include, but is not limited to Position, Navigation, and Timing (PNT) data from PNT sensors; [0044] the platform 150 may have a GPS GNSS sensor, a Galileo GNSS sensor, and an IMU sensor. The resilience engine 128 may detect a jamming threat on the GPS GNSS sensor, reducing its calculated sensor trust metric down to 10% and effectively ignoring sensor data from the GPS GNSS sensor altogether. The fusion module 138 works to fuse location data from the GPS GNSS sensor, Galileo GNSS sensor, and the IMU sensor. When the GPS GNSS sensor effectively goes offline from jamming, a fused location can still be calculated from the Galileo GNSS and IMU sensors, providing location sensing that is robust to threats). 
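The fused-location behavior D'Angio describes in paragraph [0044] (a jammed GPS sensor driven to near-zero trust while Galileo and IMU estimates still yield a fix) amounts to a trust-weighted combination of per-sensor position estimates. A minimal sketch, with invented sensor names, coordinates, and trust values:

```python
def fuse_location(estimates):
    """Fuse per-sensor (lat, lon) estimates weighted by a trust metric
    in [0, 1]. A jammed sensor with trust 0 is effectively ignored,
    so a usable fix survives the loss of any single sensor.

    estimates: dict mapping sensor name -> ((lat, lon), trust).
    """
    total = sum(trust for _, trust in estimates.values())
    if total == 0:
        raise ValueError("no trusted sensors available")
    lat = sum(pos[0] * trust for pos, trust in estimates.values()) / total
    lon = sum(pos[1] * trust for pos, trust in estimates.values()) / total
    return lat, lon

# GPS jammed (trust 0.0); Galileo and IMU dead reckoning still trusted:
fused = fuse_location({
    "gps":     ((37.0000, -122.0000), 0.0),
    "galileo": ((37.7750, -122.4194), 0.9),
    "imu":     ((37.7752, -122.4196), 0.6),
})
```

A production fusion module would weight full covariance estimates (e.g., a Kalman filter) rather than scalar trusts, but the weighted average captures the robustness argument in the quoted passage.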
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to substitute the sensors in Lablans with the sensors in D’Angio. See KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) (simple substitution of one known element for another to obtain predictable results). Such a simple substitution would yield the predictable result of a process of calculating, using PNT sensor(s) comprising at least an inertial measurement unit (IMU) and a Global Navigation Satellite System (GNSS) receiver, a location of a locator device. Lablans does not explicitly teach, however Rosas-Maxemin teaches scan, using the camera and remote sensors including at least one Detection and Ranging (LiDAR) sensor, environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a machine learning (ML) model and a computer vision algorithm; determine, using the ML model, the environmental features associated with the lot; calculate a location of the one or more shipping containers or one or more open parking spots based on the determination of the environmental features associated with the lot ([0016] receiving image data captured using one or more cameras, the image data including portions of one or more parking spaces; determining one or more parking space parameters associated with respective one or more parking spaces based on machine learning identification of aspects of the image data, the one or more parking space parameters including listing location availability and location data; determining one or more parking spaces correspond with one or more parking spaces stored in a parking space database based at least in part on comparing the respective one or more parking space parameters; receiving a search request from a first wireless device; selecting a first parking space for a first vehicle based on the one or more parking 
space parameters and matching preferences included in the search request; and transmitting instructions to the first wireless device to direct the first vehicle autonomously to the first parking space based on the selection, the instructions including a confirmation identifier, which, when transmitted by the first wireless device via short range wireless communication, causes a secured area to become unlocked; [0035] Once it is determined that a vehicle is looking for parking, image data gathered from multiple sources may be used to identify available parking spaces. The image data may be captured from cameras on other vehicles, cameras in a parking facility or along a street, and other parking sensors. The image data may be processed using machine learning algorithms, which may be trained to detect particular parking features, such as the presence or absence of a vehicle. Based on a recognition of the image and location information, the particular parking space in the image may be determined, for instance, by comparing information about the parking space with parking spaces in databases of parking spaces; [0064] at the entrance/exit to the parking lot there are sensors, such as cameras, that may identify a vehicle and associate the vehicle with a payment profile; [0074] parking space availability may be determined by using data from one or more sources, such as parking sensors in the lot such as wireless communication device 302, camera 310, and cameras on vehicles that have entered geographic area 304, such as vehicles 350-355; [0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. 
Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include in the parking location method of Lablans the process of scanning, using the camera and remote sensors including at least one Detection and Ranging (LiDAR) sensor, environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a machine learning (ML) model and a computer vision algorithm; determining, using the ML model, the environmental features associated with the lot; and calculating a location of the one or more shipping containers or one or more open parking spots based on the determination of the environmental features associated with the lot as taught by Rosas-Maxemin since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination are predictable. 
Such a combination would yield the predictable result of a parking location method where a camera and remote sensors, including at least one LiDAR sensor, scan environmental features associated with a lot, the features including at least features of a shipping container or parking spot using an ML model and a computer vision algorithm; the environmental features associated with the lot are determined using the ML model; and a location of the one or more shipping containers or one or more open parking spots is calculated based on the determination of the environmental features associated with the lot. Lablans does not explicitly teach, however Rosas-Maxemin teaches generate a mapping of the lot based on the calculated location of the one or more shipping containers or one or more open parking spots ([0072] the virtual boundary data is enhanced with navigational mappings, such as the location of the entrance and exit on the boundary, the location of the parking spaces in relation to the boundary, the indoor routing within the geographic area, such as the indicated direction vehicles are meant to travel or detours, and/or the like. [0117] During a process 710, initial parking lot conditions are determined and/or inputted based on known listing location parameters. In some examples, the parking lot is empty, and all that is initialized is the area of the parking lot. In some examples, there are one or more vehicles parked in the parking lot. In some examples, specific contours of a parking lot, e.g., an indoor garage, are mapped out using external sensors of vehicles and/or cameras within a parking lot. In some examples, sensors may include sensors described above with respect to FIG. 3. In some aspects, a vehicle enters geographic area, such as geographic area 602, and vehicles 610-618, 620-625, 626-638, 640-649, 650-661, and 662-665 are parked as illustrated in FIG. 6) (see rejection above for combination rationale). 
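The "generate a mapping of the lot" limitation mapped to Rosas-Maxemin's paragraphs [0072] and [0117] can be pictured as building a simple occupancy grid from the calculated spot locations. The grid dimensions and spot coordinates below are invented for illustration only:

```python
def generate_lot_mapping(rows, cols, open_spots):
    """Build an occupancy grid for a lot: 'O' marks a calculated open
    parking spot (or open shipping-container slot), '#' an occupied one.

    open_spots: iterable of (row, col) locations computed upstream by
    the ML / computer-vision step; out-of-range entries are ignored.
    """
    grid = [["#"] * cols for _ in range(rows)]
    for r, c in open_spots:
        if 0 <= r < rows and 0 <= c < cols:
            grid[r][c] = "O"
    return ["".join(row) for row in grid]

# Hypothetical 3x4 lot with two open spots detected:
mapping = generate_lot_mapping(3, 4, [(0, 1), (2, 3)])
# mapping == ["#O##", "####", "###O"]
```

A mapping in this compact row-string form is also small enough to serve as the "mapping data" transmitted to a remote location in the following limitation.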
Lablans does not explicitly teach, however Rosas-Maxemin teaches transmit, using a communication system, the mapping data to a remote location ([0080] The data sources may include image data sources, such as images sent from cameras on one or more vehicles, cameras placed in a vantage point, and/or the like. In some examples, external vehicle sensor data from multiple vehicles may be used to identify unoccupied listing locations in real time) (see rejection above for combination rationale). Regarding Claim 16, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 15, as discussed above. Lablans does not explicitly teach, however Rosas-Maxemin teaches wherein the ML models are trained to detect particular features of the surrounding environment ([0016] receiving image data captured using one or more cameras, the image data including portions of one or more parking spaces; determining one or more parking space parameters associated with respective one or more parking spaces based on machine learning identification of aspects of the image data, the one or more parking space parameters including listing location availability and location data; determining one or more parking spaces correspond with one or more parking spaces stored in a parking space database based at least in part on comparing the respective one or more parking space parameters; receiving a search request from a first wireless device; selecting a first parking space for a first vehicle based on the one or more parking space parameters and matching preferences included in the search request; and transmitting instructions to the first wireless device to direct the first vehicle autonomously to the first parking space based on the selection, the instructions including a confirmation identifier, which, when transmitted by the first wireless device via short range wireless communication, causes a secured area to become unlocked; [0035] Once it is determined that a vehicle 
is looking for parking, image data gathered from multiple sources may be used to identify available parking spaces. The image data may be captured from cameras on other vehicles, cameras in a parking facility or along a street, and other parking sensors. The image data may be processed using machine learning algorithms, which may be trained to detect particular parking features, such as the presence or absence of a vehicle. Based on a recognition of the image and location information, the particular parking space in the image may be determined, for instance, by comparing information about the parking space with parking spaces in databases of parking spaces; [0064] at the entrance/exit to the parking lot there are sensors, such as cameras, that may identify a vehicle and associate the vehicle with a payment profile; [0074] parking space availability may be determined by using data from one or more sources, such as parking sensors in the lot such as wireless communication device 302, camera 310, and cameras on vehicles that have entered geographic area 304, such as vehicles 350-355; [0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. 
Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.) (see claim 15 rejection above for combination rationale). Regarding Claim 17, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 15, as discussed above. Lablans does not explicitly teach, however Rosas-Maxemin teaches wherein the ML models include one or more of a deep neural network, a Bayesian network, a MobileNet object detector, and a Stochastic Gradient Descent (SGD) classifier ([0081] sensor data may be used to determine occupied versus unoccupied listing locations. In some examples, a machine learning (ML) model may be trained/tuned based on training data collected from positive recognition, false recognition, and/or other criteria. In some aspects, the ML model may be a deep neural network, Bayesian network, and/or the like and/or combinations thereof. Although various types of ML models may be deployed to refine some aspects for identifying whether a listing location is occupied or not, in some aspects, one or more ML-based classification algorithms may be used. Such classifiers may include but are not limited to: MobileNet object detector, a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, and/or the like. Additionally, the ML models may be configured to perform various types of regression, for example, using one or more various regression algorithms, including but not limited to: Stochastic Gradient Descent Regression, and/or Passive Aggressive Regression, etc.) (see claim 15 rejection above for combination rationale). 
Regarding Claim 19, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 15, as discussed above. Lablans further teaches wherein the locator device is attached to a drone ([0275] In one embodiment of the present invention the camera is not earth bound as being attached to an aircraft including an airplane, a helicopter or a drone or a person flying in an aircraft; [0205] A camera may also include a loudspeaker and a microphone and a distance ranging device). Regarding Claim 20, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 15, as discussed above. Lablans further teaches wherein the at least one instruction is further configured to: transmit, using the communication system, via LoRa ([0209] A user may decide to allow 4502 sending the geospatial coordinates determined by 4503 to the outside world. A user may also instruct the device not to transmit that information or only selectively to pre-selected targets or in response to pre-authorized requesters which may be stored in the memory 4504. The device inside 4500 may be controlled wirelessly through another uniquely authorized device like a smart phone or the headframe system as disclosed herein; [0212] The server 4607 in particular is enabled to receive geospatial and identifying data from 4500 and transmit it to any of the other devices; [0214] FIG. 47 illustrates this “where are you?” method provided in accordance with one or more aspects of the present invention. A camera, for instance operated by a user, but it also may be a platform camera, itself or a connected computing device such as a smart phone puts out a request for the location of an object ‘obj’. Such a request may for instance be transmitted over a network to a server. The server matches the name ‘obj’ with an ID of a corresponding device as provided in FIG. 47 and may send a request for geospatial coordinates to that device. 
In one embodiment the device automatically updates its position to the server. The server checks an authorization to provide that information to the requesting party). II. Claims 7, 14, 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Lablans in view of D'Angio, Rosas-Maxemin, and Halverson US 20180157255. Regarding Claim 7, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 1, as discussed above. Lablans does not explicitly teach, however Halverson teaches wherein the camera is mounted on an electronically controllable mechanical gimbal ([0344] a camera/sensor 87 can be mounted in a gimbal structure where the camera/sensor 87 can change its direction with a tilt and pan servo; [0429] the camera may be mounted on a gimbal structure allowing the UMV detection system to rotate the camera to scan the operational area). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to substitute the camera in Lablans with the camera in Halverson. See KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) (simple substitution of one known element for another to obtain predictable results). Such a simple substitution would yield the predictable result of a camera mounted on an electronically controllable mechanical gimbal. Regarding Claim 14, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 8, as discussed above. Lablans does not explicitly teach, however Halverson teaches wherein the camera is mounted on an electronically controllable mechanical gimbal ([0344] a camera/sensor 87 can be mounted in a gimbal structure where the camera/sensor 87 can change its direction with a tilt and pan servo; [0429] the camera may be mounted on a gimbal structure allowing the UMV detection system to rotate the camera to scan the operational area). 
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to substitute the camera in Lablans with the camera in Halverson. See KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) (simple substitution of one known element for another to obtain predictable results). Such a simple substitution would yield the predictable result of a camera mounted on an electronically controllable mechanical gimbal. III. Claims 4, 11, 18 are rejected under 35 U.S.C. 103 as being unpatentable over Lablans in view of D'Angio, Rosas-Maxemin, and Miller US 20180334865. Regarding Claim 4, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 1, as discussed above. Lablans does not explicitly teach, however Miller teaches wherein the locator device is attached to a yard rig ([0049] The position sensors mounted to the rig-floor pipe lifting machine 10, or the link attached between the rig-floor pipe lifting machine 10 and the rig-floor 104 may be selected from the group consisting of optical sensors such as lidar, ultrasound sensors, radio frequency sensors, or other sensors known in the art). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to substitute the locator device in Lablans with the locator device in Miller. See KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) (simple substitution of one known element for another to obtain predictable results). Such a simple substitution would yield the predictable result of a locator device attached to a yard rig. Regarding Claim 11, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 8, as discussed above. 
Lablans does not explicitly teach, however Miller teaches wherein the locator device is attached to a yard rig ([0049] The position sensors mounted to the rig-floor pipe lifting machine 10, or the link attached between the rig-floor pipe lifting machine 10 and the rig-floor 104 may be selected from the group consisting of optical sensors such as lidar, ultrasound sensors, radio frequency sensors, or other sensors known in the art). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to substitute the locator device in Lablans with the locator device in Miller. See KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) (simple substitution of one known element for another to obtain predictable results). Such a simple substitution would yield the predictable result of a locator device attached to a yard rig. Regarding Claim 18, the combination of Lablans, D'Angio, and Rosas-Maxemin teaches the limitations of claim 15, as discussed above. Lablans does not explicitly teach, however Miller teaches wherein the locator device is attached to a yard rig ([0049] The position sensors mounted to the rig-floor pipe lifting machine 10, or the link attached between the rig-floor pipe lifting machine 10 and the rig-floor 104 may be selected from the group consisting of optical sensors such as lidar, ultrasound sensors, radio frequency sensors, or other sensors known in the art). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to substitute the locator device in Lablans with the locator device in Miller. See KSR International Co. v. Teleflex Inc. (KSR), 550 U.S. 398, 82 USPQ2d 1385 (2007) (simple substitution of one known element for another to obtain predictable results). Such a simple substitution would yield the predictable result of a locator device attached to a yard rig. 
Response to Arguments Applicant’s arguments regarding the 35 U.S.C. 101 rejections have been fully considered and are persuasive. The 35 U.S.C. 101 rejections have been withdrawn. Applicant's arguments regarding the prior art rejections have been fully considered but they are not persuasive. Applicant argues that: Neither Lablans nor D'Angio teaches: calculate, using one or more position, navigation, and timing (PNT) sensors comprising at least an inertial measurement unit (IMU) and a Global Navigation Satellite System (GNSS) receiver, a location of a locator device; scan, using the camera and remote sensors including at least one Detection and Ranging (LiDAR) sensor, environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a machine learning (ML) model and a computer vision algorithm; determine, using the ML model, the environmental features associated with the lot; calculate a location of the one or more shipping containers or one or more open parking spots based on the determination of the environmental features associated with the lot and location of the locator device; generate a mapping of the lot based on the calculated location of the one or more shipping containers or one or more open parking spots; and transmit, using a communication system, the mapping data to a remote location (pp. 12-13). As discussed more fully above, such features are taught by the combination of Lablans, D'Angio, and Rosas-Maxemin. Applicant argues that: Halverson is silent regarding: the determination of the environmental features associated with the lot and location of the locator device; generate a mapping of the lot based on the calculated location of the one or more shipping containers or one or more open parking spots; and transmit, using a communication system, the mapping data to a remote location (pp. 13-14). 
As discussed more fully above, such features are taught by the combination of Lablans, D'Angio, Rosas-Maxemin, and Halverson. Applicant argues that:

Miller is silent regarding: calculate, using one or more position, navigation, and timing (PNT) sensors comprising at least an inertial measurement unit (IMU) and a Global Navigation Satellite System (GNSS) receiver, a location of a locator device; scan, using the camera and remote sensors including at least one Light Detection and Ranging (LiDAR) sensor, environmental features associated with a lot, the features including at least features of a shipping container or parking spot using a machine learning (ML) model and a computer vision algorithm; determine, using the ML model, the environmental features associated with the lot; calculate a location of the one or more shipping containers or one or more open parking spots based on the determination of the environmental features associated with the lot and location of the locator device; generate a mapping of the lot based on the calculated location of the one or more shipping containers or one or more open parking spots; and transmit, using a communication system, the mapping data to a remote location (pp. 14-15).

As discussed more fully above, such features are taught by the combination of Lablans, D'Angio, Rosas-Maxemin, and Miller.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUANE MOORE, whose telephone number is (571) 272-7544. The examiner can normally be reached Mon-Fri, 9:00-5:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JEFFREY ZIMMERMAN, can be reached at (571) 272-4602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/D.N.M./
Examiner, Art Unit 3628

/GEORGE CHEN/
Primary Examiner, Art Unit 3628
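The disputed limitations describe a sensor-fusion mapping pipeline: localize a device with GNSS plus IMU, detect containers and parking spots with camera/LiDAR and an ML model, place those detections in the world frame, and transmit the resulting lot map. As a minimal sketch of how such a pipeline fits together (all names, offsets, and the flat-earth degree conversion are hypothetical illustrations, not drawn from the application or the cited references):

```python
from dataclasses import dataclass

METERS_PER_DEG_LAT = 111_320.0  # rough conversion; ignores longitude scaling

@dataclass
class Fix:
    lat: float
    lon: float

def fuse_position(gnss: Fix, imu_offset_m: tuple) -> Fix:
    """Apply a crude IMU dead-reckoning offset (north_m, east_m) to a GNSS fix."""
    north_m, east_m = imu_offset_m
    return Fix(gnss.lat + north_m / METERS_PER_DEG_LAT,
               gnss.lon + east_m / METERS_PER_DEG_LAT)

def map_lot(locator: Fix, detections: list) -> dict:
    """Each detection is (label, north_m, east_m) relative to the locator,
    standing in for the output of a LiDAR/camera ML detector."""
    mapping = {}
    for label, north_m, east_m in detections:
        pos = fuse_position(locator, (north_m, east_m))
        mapping[label] = (round(pos.lat, 6), round(pos.lon, 6))
    return mapping

# Locator position = last GNSS fix corrected by 2 m of IMU-tracked motion north.
locator = fuse_position(Fix(37.7749, -122.4194), (2.0, 0.0))
lot_map = map_lot(locator, [("container_A", 10.0, 5.0), ("spot_12", -4.0, 8.0)])
# Payload handed to the communication system for transmission.
payload = {"locator": (locator.lat, locator.lon), "features": lot_map}
```

The point of the sketch is only the data flow the claims recite; a real system would replace the offset arithmetic with a proper geodetic transform and the detection list with model output.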

Prosecution Timeline

Jun 07, 2024
Application Filed
May 09, 2025
Non-Final Rejection — §103, §112
Oct 09, 2025
Applicant Interview (Telephonic)
Oct 09, 2025
Examiner Interview Summary
Nov 11, 2025
Response Filed
Jan 24, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12518563
DELIVERY DRONE AND DELIVERY METHOD
2y 5m to grant Granted Jan 06, 2026
Patent 12468988
ELECTRONIC LEDGER TICKETING SYSTEMS & PLATFORMS
2y 5m to grant Granted Nov 11, 2025
Patent 12373765
SYSTEM AND METHOD FOR PROVIDING UNIFORM TRACKING INFORMATION WITH A RELIABLE ESTIMATED TIME OF ARRIVAL
2y 5m to grant Granted Jul 29, 2025
Patent 12361340
PARKING AND CHARGING MARKETPLACE AND RESERVATION SYSTEM
2y 5m to grant Granted Jul 15, 2025
Patent 12299607
EFFICIENT PARAMETER SELECTION FOR CLIENT-CREATED PRIVATE JET SEGMENTS
2y 5m to grant Granted May 13, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
26%
Grant Probability
42%
With Interview (+15.6%)
2y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 96 resolved cases by this examiner. Grant probability derived from career allow rate.
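The headline projections combine in a simple way: the base grant probability is the examiner's career allow rate (25 granted of 96 resolved), and the with-interview figure adds the stated lift. A quick check, assuming the +15.6% lift is additive in percentage points:

```python
granted, resolved = 25, 96
base = granted / resolved        # career allow rate, ~0.260
interview_lift = 0.156           # stated interview lift, in percentage points
with_interview = base + interview_lift
```

Rounding gives the displayed 26% and 42% figures.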
