Prosecution Insights
Last updated: April 18, 2026
Application No. 18/734,898

Systems and Methods for Automating Access

Final Rejection: §103, §112
Filed: Jun 05, 2024
Examiner: DEL TORO-ORTEGA, JORGE G
Art Unit: 3628
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Eaigle Inc.
OA Round: 4 (Final)
Grant Probability: 18% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 2y 7m
Grant Probability With Interview: 48%

Examiner Intelligence

Grants only 18% of cases
Career Allow Rate: 18% (24 granted / 136 resolved; -34.4% vs TC avg)

Strong +30% interview lift
Interview Lift: +29.9% for resolved cases with interview

Typical timeline
Avg Prosecution: 2y 7m; 24 applications currently pending

Career history
Total Applications: 160 across all art units
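The headline figures above follow directly from the underlying counts. A quick check in Python (24/136 and the 48% with-interview rate are the dashboard's numbers; the displayed +29.9% lift evidently uses unrounded inputs, so the recomputed lift differs slightly):

```python
# Recompute the examiner stat-card figures from the stated counts.
granted, resolved = 24, 136
career_allow_rate = granted / resolved             # ~0.176, shown as 18%
allow_rate_with_interview = 0.48                   # dashboard figure
interview_lift = allow_rate_with_interview - career_allow_rate

print(f"career allow rate: {career_allow_rate:.1%}")  # 17.6%
print(f"interview lift: {interview_lift:+.1%}")       # +30.4% vs displayed +29.9%
```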

Statute-Specific Performance

§101: 38.3% (-1.7% vs TC avg)
§103: 38.8% (-1.2% vs TC avg)
§102: 7.9% (-32.1% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 136 resolved cases.
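A consistency check on the chart data: subtracting each delta from the examiner's rate recovers the Tech Center average the tool compared against, and all four statutes point back to the same ~40% estimate, consistent with the single "Tech Center average estimate" note:

```python
# Each "vs TC avg" delta implies the Tech Center average used for comparison:
# tc_avg = examiner_rate - delta.
examiner_rate = {"§101": 38.3, "§103": 38.8, "§102": 7.9, "§112": 13.2}  # percent
delta_vs_tc   = {"§101": -1.7, "§103": -1.2, "§102": -32.1, "§112": -26.8}

tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute implies the same 40.0% TC average
```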

Office Action

Rejections: §103, §112
Status of Claims

This action is in reply to the communications filed on 03/10/2026. Claims 1, 5-6, 9-10, 12, 16-17, and 20-22 have been amended. Claims 1, 3-12, and 14-22 are currently pending and have been examined.

Response to Applicant’s Remarks

Applicant’s arguments and remarks filed on 03/10/2026 have been fully considered and each argument will be respectfully addressed in the following final office action.

Response to 35 U.S.C. § 112 Remarks

Applicant’s remarks filed on page 9 of the Response concerning the 35 U.S.C. § 112(b) rejection of the claims have been fully considered and are persuasive. In view of the amendments to the independent claims, the §112(b) rejections have been overcome and are withdrawn herein accordingly.

Response to 35 U.S.C. § 103 Remarks

Applicant’s remarks filed on pages 9-14 of the Response concerning the 35 U.S.C. § 103 rejection of the pending claims have been fully considered but are moot in view of the amended rejection that may be found starting on page 7 of this final office action. On pages 9-14 of the Response, the Applicant argues that the prior art of record, namely Prasad and Kim, does not teach the features of the amended claims. On page 13 of the Response, the Applicant argues “Kim does not teach, suggest or consider using region or lane information, let alone using machine learning as recited in the present claims. As such, the claimed visual path guidance is different from the general directions (e.g., left turn arrow – See Fig. 5) suggested by Kim”. The Examiner respectfully disagrees that Kim does not teach the amended features of the independent claims. As disclosed by the Applicant in the specification, “the automation system 202 generates a guidance data file that includes path guidance from the gate to the assigned destination […] the guidance data file can include visual elements (e.g., arrows, boxes, etc.)
overlaid on top of images from the imaging devices 102 showing the current state of the pathway…the guidance data file can generate visual elements overlayed on top of a map” (¶ [0105]), and “region can optionally include one or more designated pathways 116 or more generally routes 110 for moving people, goods, or materials” (¶ [0050]). Thus, under broadest reasonable interpretation in view of the specification, the claimed “visual path guidance comprising region and/or lane assignments” encompasses visual elements (e.g., arrows) that are overlaid on top of images of a pathway or map, and the region/lane assignments encompass designated pathways or general routes. Kim teaches a system which utilizes artificial intelligence to generate guide information for a vehicle to navigate to a reserved parking spot. First, the system processor may receive identification information of an entering vehicle from a first camera installed at an entrance of a parking lot and check the reservation information for a reservation time and a reserved parking spot in response to the identification information (see abstract). Subsequently, the system may generate route information based on the reserved parking spot, generate guide information based on the location of the vehicle and the route information, and provide the guide information to a user terminal operating the vehicle (see abstract). In particular, the system (using artificial intelligence) generates an augmented reality image corresponding to the position of the vehicle, the position of a third camera, and the angle of the third camera based on the route information, and superimposes the augmented reality image on the vehicle image to provide the guide information (see ¶ [0010], ¶ [0097]). Moreover, Fig.
5 depicts the guidance information provided to the user terminal, wherein the rendered guidance information displays an arrow indicating the lane(s) to be traversed by the vehicle in order to reach the parking spot (see ¶ [0062]); equivalent to determining a destination of the vehicle based on the validated check-in event; and generating a guidance data file that comprises visual path guidance from a location of the vehicle to the determined destination, the visual path guidance comprising region and/or lane assignments determined using machine learning in a 3D domain, and providing a world view visual rendering from the 3D domain that provides the visual path guidance by specifying the region and/or lane assignments associated with entry or egress through the gate, the visual rendering to be displayed by an electronic device associated with the vehicle.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1, 3-12, and 14-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C.
112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Independent claims 1, 12, and 20, and their dependents recite “the visual path guidance comprising a region and/or lane assignments determined using machine learning in a 3D domain” and “providing a world view visual rendering from the 3D domain that provides the visual path guidance by specifying the region and/or lane assignments associated with entry or egress through the gate”. On page 9 of the Applicant’s remarks submitted 03/10/2026, the Applicant notes “[s]upport for these amendments can be found throughout the application as filed, for example, paragraphs [0041]-[0042] and [0044]-[0050] and FIGS. 1 and 2A of the application as filed”. These paragraphs and figures, however, fail to disclose or suggest the features for determining regional and/or lane assignments using machine learning in a 3D domain and providing a world view visual rendering from the 3D domain that specifies the region and/or lane assignments associated with entry or egress through the gate.
The specification further discloses “The AI module 210 can encompass one or more of any one of a neural network-based object detection, tracking, instance segmentation, classification, full 360 reconstruction, and damage verification through 3D matching model […] The AI module 210 can generate a 360 full view vehicle reconstruction utilizing a combination of imaging devices…The AI module 210 can create a comprehensive 360-degree view of the vehicle’s surroundings” (¶ [0057]), “Imaging from the imaging device(s) 102 can be used to generate a three-dimensional reconstruction of the vehicle at the gate” (¶ [0073]); “3D reconstructions from the ingress gate and egress gate can be compared with an aspect of the AI module 210. Block 410 can involve determining any damage to the vehicle 112 between two reconstructions. For example, portions of the reconstruction that show damage in the later image can be saved, along with the earlier reconstruction, to document the damage” (¶ [0092]). This disclosure, at most, suggests that the AI module creates a three-dimensional reconstruction of the vehicle, and the 3D/360-degree reconstruction of the vehicle is utilized to assess and document any damages to the vehicle. The specification does not indicate anywhere that the AI module utilizes a 3D reconstruction of the vehicle in the process of determining visual path guidance information comprising a region and/or lane assignments, or that the region and/or lane assignments are generated in a 3D domain. Furthermore, the specification discloses “the automation system 202 generates a guidance data file that includes path guidance from the gate to the assigned destination […] the guidance data file can include visual elements (e.g., arrows, boxes, etc.)
overlaid on top of images from the imaging devices 102 showing the current state of the pathway…the guidance data file can generate visual elements overlayed on top of a map” (¶ [0105]), and “the guidance file can be overlayed on top of a map or other images displayed on the input terminal 106 of the user’s 107 perusal…mobile device…screen associated with the vehicle…etc.” (¶ [0106]). This disclosure, at most, suggests that the visual path guidance elements are overlaid on top of a map or images of a pathway. This disclosure does not suggest “providing a world view visual rendering from the 3D domain that provides the visual path elements”, but rather that the visual path guidance elements are simply overlaid on top of a map or images of the pathway. Thus, for the reasons stated above, the written description fails to disclose the claimed features identified above. For the sake of compact prosecution, the claim features described above will be treated as though they comply with the written description requirement and do not recite new matter, and will be interpreted under broadest reasonable interpretation in view of the specification paragraphs noted above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-12, and 14-22 are rejected under 35 U.S.C. § 103 as being unpatentable over Prasad et al. (U.S. Publication No. 2023/0114688 A1), hereafter known as Prasad, in view of Kim (KR 102511356 B1), hereafter known as Kim.

Claim 1: Prasad teaches the following:

Receiving, at an automation computer system, a plurality of images of a vehicle from one or more imaging devices monitoring a gate;
Processing the plurality of images with the automation computer system by:
With a machine vision model of the automation computer system, determining one or more properties associated with the vehicle;
Receiving a data file based on an interaction between a driver and an input terminal in communication with the automation computer system;
Processing the data file with the automation computer system by:
Generating a check-in event;
With a machine learning model, validating the check-in event based at least in part on the one or more properties associated with the vehicle; and

Prasad teaches “Discussed herein are systems and devices for automating and computerizing the check in and check out process at entry and exit locations of facilities […] a system including internet of things (IoT) computing devices may be positioned about entry and exit locations (e.g., check in and/or check out area) of a storage facility, shipping yard, processing plant, warehouse, distribution center, port, rail yard, rail terminal, and the like.
The IoT computing device may be equipped with various sensor and/or image capture technologies, and configured to capture, parse, and identify vehicle and container information from the exterior of vehicles, containers, pallets, and the like” (¶ [0018]); “the IoT computing devices and/or an associated cloud based service may utilize machine learning and/or deep learning models to perform the various tasks and operations” (¶ [0021]); “The sensors may be weather agnostic (e.g., may operate in foggy, rainy, or snowy conditions), such as via infrared image systems, radar based image systems, LIDAR based image systems, SWIR based image systems, Muon based image systems, radio wave based image systems, and/or the like. The IoT computing devices and/or the cloud-based services may also be equipped with models and instructions to capture, parse, identify, and extract information from the vehicles, containers, and/or various documents” (¶ [0021]); “the sensor system 102 and/or the cloud-based system 104 may request and receive verification data 120 from the third-party system 118 associated with a vehicle attempting to enter the facility. The verification data 120 may be utilized to verify or authenticate the driver, vehicle, and/or container using the extracted identifiers from the vehicle, container, chassis, and the like” (¶ [0033]); “In one specific example, the driver may be instructed to place or hold various documents in a manner that the documents are visible through, for instance, a front or side windshield. In other examples, the facility may include a document reader sensor located proximate to the entry and/or exit of the facility. The driver may then place the physical documents to be scanned within the reader sensor.
In this manner, the sensor system 102 may capture sensor data 106 associated with various physical paperwork associated with the driver, vehicle, container, and/or contents of the container” (¶ [0034]); “the cloud-based system 104 or other central computing system may be configured to, upon verification of the driver, vehicle, container, or the like, generate control signals 108 for the facility systems 110. For instance, the control signal 108 may cause a facility gate to open” (¶ [0031]). Thus, Prasad teaches a system for automating and computerizing the check in and check out process at entry and exit locations of facilities, where a facility gate may be opened upon verification of a driver and/or vehicle. Furthermore, the system comprises IoT computing devices equipped with various sensors and imaging devices configured to capture, parse, and identify vehicle and container information from the exterior of vehicles, containers, pallets, and the like; equivalent to receiving, at an automation computer system, a plurality of images of a vehicle from one or more imaging devices monitoring a gate and processing the plurality of images with an automation computer system. Furthermore, the system may request and receive verification data from third-party systems to verify/authenticate the driver, vehicle, and/or container using the extracted information from the sensors/imaging devices, where the IoT devices/cloud-based system (comprising the various sensor/imaging devices) may utilize machine learning models to perform the operations. Additionally, the extracted information may include information collected from documents presented by a driver to a document reader sensor located proximate to the entry and/or exit of the facility. As such, upon verification of the driver, vehicle, and/or container, the system may control the facility gate to be opened.
Therefore, the IoT devices (comprising various sensors/imaging devices) configured to capture, parse, extract, and verify the vehicle, driver, container, and document information (i.e., documents provided/input by the driver) using machine learning models in order to control access to a facility (i.e., opening a facility gate in response to verifying the extracted information against acquired verification data) is equivalent to determining one or more properties associated with the vehicle with a machine vision model of the automation computer system; receiving a data file based on an interaction between a driver and an input terminal in communication with the automation computer system, processing the data file with the automation computer system by: generating a check-in event, and validating the check-in event based at least in part on the one or more properties associated with the vehicle with a machine learning model.

In response to validating the check-in event, sending, by the processor of the automation computer system, an instruction to a processor of an access control device coupled to the gate and configured to operate the gate in response to the instruction processed by the processor of the access control device; and the access control device controlling the gate to open the gate and permit entry or egress to the vehicle.

Prasad teaches “the cloud-based system 104 or other central computing system may be configured to, upon verification of the driver, vehicle, container, or the like, generate control signals 108 for the facility systems 110. For instance, the control signal 108 may cause a facility gate to open” (¶ [0031]); “control signals 108 […] may be transmitted between various systems using networks […] The networks 126-132 may be any type of network that facilitates communication between one or more systems and may include one or more cellular networks, radio, WiFi networks […] and so forth” (¶ [0036]).
As discussed further above with regard to claim 1, Prasad teaches a system for automating and computerizing the check in and check out process at entry and exit locations of facilities. Furthermore, the system comprises IoT computing devices equipped with various sensors and imaging devices configured to capture, parse, and identify vehicle and container information from the exterior of vehicles, containers, pallets, and the like. Furthermore, the system may request and receive verification data from third-party systems to verify/authenticate the driver, vehicle, and/or container using the extracted information from the sensors/imaging devices. As such, upon verification of the driver, vehicle, and/or container, the computer system may generate/communicate control signals over a computer network to a facility system such as to control a facility gate to be opened; equivalent to in response to validating the check-in event, sending, by the processor of the automation computer system, an instruction to a processor of an access control device coupled to the gate and configured to operate the gate in response to the instruction processed by the processor of the access control device; and the access control device controlling the gate to open the gate and permit entry or egress to the vehicle. 
Prasad does not explicitly teach determining a destination of the vehicle based on the validated check-in event; generating a guidance data file that comprises visual path guidance from a location of the vehicle to the determined destination, the visual path guidance comprising region and/or lane assignments determined using machine learning in a 3D domain and providing a world view visual rendering from the 3D domain that provides the visual path guidance by specifying the region and/or lane assignments associated with entry or egress through the gate, the visual rendering to be displayed by an electronic device associated with the vehicle; and sending, by a processor of the automation computer system, the guidance data file to the electronic device associated with the vehicle, the guidance data file generating visual elements to indicate a destination once executed by the electronic device associated with the vehicle.

However, Kim teaches the following:

Determining a destination of the vehicle based on the validated check-in event;
Generating a guidance data file that comprises visual path guidance from a location of the vehicle to the determined destination, the visual path guidance comprising region and/or lane assignments determined using machine learning in a 3D domain, and providing a world view visual rendering from the 3D domain that provides the visual path guidance by specifying the region and/or lane assignments associated with entry or egress through the gate, the visual rendering to be displayed by an electronic device associated with the vehicle;
Sending, by a processor of the automation computer system, the guidance data file to the electronic device associated with the vehicle, the guidance data file generating visual elements to indicate a destination once executed by the electronic device associated with the vehicle;

Kim teaches “[t]he present invention relates to a technology incorporating artificial intelligence technology into a vehicle
parking service […] the processor is configured to: receive identification information of an entering vehicle from a first camera installed at an entrance of a parking lot; check the reservation information for a reservation time and a reserved parking spot in response to the identification information; check the location of the vehicle from a plurality of second cameras installed inside the parking lot; generate route information based on the location of the entrance and exit of the parking lot and the reserved parking spot; generate guide information based on the location of the vehicle and the route information; and provide the guide information to a user terminal operating the vehicle” (see Abstract); “FIG. 3 is a diagram showing generation of route information according to an embodiment of the present invention” (¶ [0058]); “Figure 5 is a diagram showing guidance information in which an augmented reality image is superimposed on a vehicle image” (¶ [0061]); “Referring to FIG. 5 , the augmented reality image may be an arrow image indicating which location to go on the route information based on the location of the vehicle” (¶ [0062]); “the user terminal 200 can be a smartphone, and receiving guidance information can be a vehicle navigation device” (¶ [0040]); “the artificial intelligence-based vehicle parking reservation and guide service providing method according to an embodiment of the present invention may generate guide information based on the location of the vehicle and the route information” (¶ [0097]); “generating an augmented reality image corresponding to the position of the vehicle, the position of the third camera, and the angle of the third camera based on the route information, and superimposing the augmented reality image on the vehicle image to provide the guide information” (¶ [0010]). As disclosed by the applicant in the specification, “the guidance data file can include visual elements (e.g., arrows, boxes, etc.) 
overlaid on top of images from the imaging devices 102 showing the current state of the pathway…the guidance data file can generate visual elements overlayed on top of a map” (¶ [0105]), “the guidance file can be overlayed on top of a map or other images displayed on the input terminal 106 of the user’s 107 perusal…mobile device…screen associated with the vehicle…etc.” (¶ [0106]), and “region can optionally include one or more designated pathways 116 or more generally routes 110 for moving people, goods, or materials” (¶ [0050]). Thus, under broadest reasonable interpretation in view of the specification, the limitations involving “visual path guidance comprising region and/or lane assignments determined using machine learning in a 3D domain, and providing a world view visual rendering from the 3D domain that provides the visual path guidance by specifying the region and/or lane assignments associated with entry or egress through the gate” are interpreted as providing the visual path guidance as elements overlaid on top of a map or images of the current state of a pathway, wherein the region and/or lane assignments are designated pathways or a general route. Kim teaches a system which utilizes artificial intelligence to generate guide information for a vehicle to navigate to a reserved parking spot. First, the system processor may receive identification information of an entering vehicle from a first camera installed at an entrance of a parking lot and check the reservation information for a reservation time and a reserved parking spot in response to the identification information. Subsequently, the system may generate route information based on the reserved parking spot, generate guide information based on the location of the vehicle and the route information, and provide the guide information to a user terminal operating the vehicle.
In particular, the system (using artificial intelligence) generates an augmented reality image corresponding to the position of the vehicle, the position of a third camera, and the angle of the third camera based on the route information, and superimposes the augmented reality image on the vehicle image to provide the guide information. Moreover, Fig. 5 depicts the guidance information provided to the user terminal, wherein the rendered guidance information displays an arrow indicating the lane(s) to be traversed by the vehicle in order to reach the parking spot; equivalent to determining a destination of the vehicle based on the validated check-in event; and generating a guidance data file that comprises visual path guidance from a location of the vehicle to the determined destination, the visual path guidance comprising region and/or lane assignments determined using machine learning in a 3D domain, and providing a world view visual rendering from the 3D domain that provides the visual path guidance by specifying the region and/or lane assignments associated with entry or egress through the gate, the visual rendering to be displayed by an electronic device associated with the vehicle. Moreover, these teachings are equivalent to sending, by a processor of the automation computer system, the guidance data file to the electronic device associated with the vehicle, the guidance data file generating visual elements to indicate a destination once executed by the electronic device associated with the vehicle.
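Neither reference publishes a concrete file format, but the "guidance data file" discussed above (visual elements such as arrows and boxes overlaid on pathway images or a map, plus a region/lane assignment) could plausibly be serialized along these lines. Every field name below is hypothetical, chosen purely for illustration, and is not drawn from Prasad, Kim, or the application:

```python
import json

# Illustrative sketch of one possible guidance-data-file payload; the schema
# and all identifiers here are invented for this example.
guidance = {
    "vehicle_id": "TRK-0042",                 # hypothetical identifier
    "destination": {"zone": "B", "dock": 7},
    "lane_assignment": "lane-2",
    "overlays": [
        {"type": "arrow", "frame": "map", "start": [12.0, 4.5], "end": [18.0, 4.5]},
        {"type": "box", "frame": "camera", "bounds": [640, 220, 780, 360]},
    ],
}

payload = json.dumps(guidance)        # what would be sent to the driver's device
assert json.loads(payload) == guidance  # round-trips losslessly
```

A structure like this is consistent with the BRI adopted above: the `overlays` entries are the visual elements to be drawn on a map or camera frame, and `lane_assignment` carries the region/lane designation.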
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the automated facility check-in system of Prasad the ability to determine a destination of the vehicle based on a validated check-in event, generate a guidance data file that comprises visual path guidance from a location of the vehicle to the determined destination where the visual path guidance comprises lane assignments determined using machine learning in a 3D domain, provide a world view visual rendering from the 3D domain that provides the visual path guidance by specifying the region and/or lane assignments associated with entry or egress through the gate where the visual rendering is displayed by an electronic device associated with the vehicle, and send the guidance data file to the electronic device associated with the vehicle that generates visual elements to indicate a destination once executed by the electronic device associated with the vehicle, as taught by Kim, since the claimed invention is merely a combination of old elements. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Furthermore, one of ordinary skill in the art would have been motivated to make this modification to the system of Prasad with the purpose “to reduce the travel time of the vehicle and thereby reduce the emission of exhaust gas, thereby preventing environmental pollution and preventing illegal parking” (¶ [0037]), as suggested by Kim.

Claim 3: Prasad/Kim teaches the limitations of claim 1.
Furthermore, Prasad teaches the following:

With the machine vision model, generating a three-dimensional model of the vehicle;
Receiving a second set of images from the one or more imaging devices;
With the machine vision model, generating a second three-dimensional model of the vehicle based on the second set of images; and
Generating an estimate of damage to the vehicle incurred for a duration between the plurality of images and the second set of images.

Prasad teaches “The IoT computing devices and/or the cloud-based services may also be equipped with models and instructions to capture, parse, identify, and extract information from the vehicles, containers, and/or various documents […] the IoT computing devices and/or the cloud-based services may be configured to perform segmentation, classification, attribute detection, recognition, document data extraction, and the like […] the IoT computing devices and/or an associated cloud based service may utilize machine learning and/or deep learning models to perform the various tasks and operations” (¶ [0021]); “the IoT computing devices may use various types of a sensors (e.g., LIDAR, SWIR, Radio Wave, Muon, etc.), with capabilities such as but not limited to varying fields of view, along with the camera or image systems and edge computing capabilities to detect various attributes such as container damage, leakage, size, weight, and the like of an a vehicle, chassis, and/or container” (¶ [0025]); “the sensor systems 102 may be configured to detect the vehicle and capture sensor data 106 (e.g., video, images, and the like) associated with the vehicle, one or more driver(s) of the vehicle, and/or one or more container(s), crate(s), or pallet(s) associated with the vehicle” (¶ [0028]); “the sensor data 106 may also be utilized to determine a state or status of the vehicle, container, chassis, or the like.
For example, the state or status may be used to determine if damage occurred during shipping and/or if any repairs to the vehicle, container, or chassis are necessary before redeployment. In some instances, additional machine learned models may be employed by the sensor system 102 and/or the cloud-based system 104 to detect damage or other wear and tear of the vehicle, container, and/or chassis […] For instance, the sensor system 102 and/or the cloud-based system 104 may compare the captured sensor data 106 and/or the status output by the machine learned models to a recorded status of the vehicle, container, and/or chassis associated with the vehicle, container, and/or chassis at the time of deployment” (¶ [0030]); “machine learning algorithms can include, but are not limited to, […] Sammon Mapping, Multidimensional Scaling (MDS) […]” (¶ [0024]). Thus, Prasad teaches that the IoT computing device may use various types of sensors and machine learning techniques (e.g., Sammon Mapping) to extract, segment, classify, detect, and recognize information from captured images/videos. As such, the IoT computing devices may determine various attributes of a vehicle using the machine learning techniques (including Sammon mapping), such as detecting damage or other wear and tear of the vehicle that occurred during shipping. One of ordinary skill in the art would recognize that Sammon mapping is a machine learning technique that maps high-dimensional data into a lower-dimensional space, such as providing a 2D or 3D representation of the captured/processed data; equivalent to generating a 3D model of the vehicle.
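For background on the technique the rejection leans on: Sammon mapping seeks a low-dimensional embedding whose pairwise distances track the input-space distances, weighting small distances more heavily than classical MDS. A minimal numpy sketch of the classic gradient-descent formulation (illustrative only; this is not code from Prasad or the application, and the learning rate and iteration count are arbitrary choices):

```python
import numpy as np

def sammon(X, n_components=3, n_iter=500, lr=0.3, seed=0):
    """Minimal Sammon mapping: gradient descent on the Sammon stress."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Pairwise distances in the original high-dimensional space.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d[d == 0] = 1e-12                       # avoid division by zero (diagonal)
    scale = d[np.triu_indices(n, 1)].sum()  # normalizing constant in the stress
    Y = rng.normal(size=(n, n_components)) * 1e-2
    for _ in range(n_iter):
        D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        np.fill_diagonal(D, 1e-12)
        # Gradient of E = (1/scale) * sum_{i<j} (D_ij - d_ij)^2 / d_ij w.r.t. Y.
        ratio = (D - d) / (D * d)
        np.fill_diagonal(ratio, 0.0)
        grad = (2.0 / scale) * (ratio[:, :, None] * (Y[:, None, :] - Y[None, :, :])).sum(axis=1)
        Y -= lr * grad
    return Y

# Example: embed 8-dimensional points into a 3D space.
X = np.random.default_rng(1).normal(size=(15, 8))
Y = sammon(X, n_components=3, n_iter=50)
print(Y.shape)  # (15, 3)
```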
Moreover, the system may detect the damage to the vehicle using these machine learning techniques by comparing the captured sensor data (i.e., images and videos) to a recorded status associated with the vehicle (i.e., previously captured sensor data); equivalent to with the machine vision model, generating a three-dimensional model of the vehicle, receiving a second set of images from the one or more imaging devices, generating a second three-dimensional model of the vehicle based on the second set of images with the machine vision model; and generating an estimate of damage to the vehicle incurred for a duration between the plurality of images and the second set of images. Claim 4: Prasad/Kim teaches the limitations of claim 1. Furthermore, Prasad does not explicitly teach, however Kim does teach, the following: Wherein the guidance data file comprises generated visual elements that overlay onto existing mapping applications. Kim teaches “embodiments described above may be implemented as hardware components, software components, and/or a combination of hardware components and software components […] A processing device may run an operating system (OS) and one or more software applications running on the operating system” (¶ [0099]); “the processor is configured to: […] provide the guide information to a user terminal operating the vehicle” (see Abstract); “Figure 5 is a diagram showing guidance information in which an augmented reality image is superimposed on a vehicle image” (¶ [0061]); “Referring to FIG. 5, the augmented reality image may be an arrow image indicating which location to go on the route information based on the location of the vehicle” (¶ [0062]); “The guidance information may be generated by overlapping augmented reality images” (¶ [0059]). Thus, Kim teaches a system configured to provide guidance information to a user terminal (i.e., a processing device running one or more software applications) that depicts a route to a parking spot. 
Moreover, Fig. 5 depicts the guidance information provided to the user terminal, wherein the rendered guidance information displays a mapped area with an arrow indicating the lane(s) to be traversed by the vehicle in order to reach the parking spot; equivalent to wherein the guidance data file comprises generated visual elements that overlay onto existing mapping applications. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the automated facility check-in system of Prasad the ability to provide a guidance data file comprising generated visual elements that overlay onto existing mapping applications, as taught by Kim, since the claimed invention is merely a combination of old elements. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Furthermore, one of ordinary skill in the art would have been motivated to make this modification to the system of Prasad with the purpose “to reduce the travel time of the vehicle and thereby reduce the emission of exhaust gas, thereby preventing environmental pollution and preventing illegal parking” (¶ [0037]), as suggested by Kim. Claim 5: Prasad/Kim teaches the limitations of claim 1. Furthermore, Prasad teaches the following: Wherein the data file comprises an image of identification of the driver, the method comprising: With a machine vision model of the automation computer system, determining one or more properties of the identification from the image, and Wherein the validation of the check-in event is at least in part based on the determined one or more properties of the identification. 
Prasad teaches “the IoT computing devices and/or the cloud-based services may be configured to perform segmentation, classification, attribute detection, recognition, document data extraction, and the like. In some cases, the IoT computing devices and/or an associated cloud based service may utilize machine learning and/or deep learning models to perform the various tasks and operations” (¶ [0021]); “ as a vehicle (e.g., a truck, rail car, ship, or the like) approaches an entry or exit point of the logistics facility (e.g., a storage facility, warehouse, port, holding yard, commercial establishment, and the like), the sensor systems 102 may be configured to detect the vehicle and capture sensor data 106 (e.g., video, images, and the like) associated with the vehicle, one or more driver(s) of the vehicle, and/or one or more container(s), crate(s), or pallet(s) associated with the vehicle” (¶ [0028]); “The captured sensor data 106 may be used to verify the vehicle, driver, container or contents of the container, and the like. In some instances, the cloud-based system 104 may process the sensor data 106, for instance, using one or more machine learned model(s) to segment, classify, and identify the desired information (e.g., the driver’s identifier, the vehicle identifier, and/or the container identifier)” (¶ [0029]); “the sensor system 102 and/or the cloud-based system 104 may request and receive verification data 120 from the third-party system 118 associated with a vehicle attempting to enter the facility. The verification data 120 may be utilized to verify or authenticate the driver, vehicle, and/or container using the extracted identifiers from the vehicle, container, chassis, and the like” (¶ [0033]); “the sensor system 102 and/or the cloud-based system 104 may also preform facial or other biometric identification of the driver to assist with determining an identity without having to access driver documents, such as a license. 
In one specific example, the driver may be instructed to place or hold various documents in a manner that the documents are visible through, for instance, a front or side windshield. In other examples, the facility may include a document reader sensor located proximate to the entry and/or exit of the facility. The driver may then place the physical documents to be scanned within the reader sensor.” (¶ [0034]). Thus, Prasad teaches a system for automating and computerizing the check in and check out process at entry and exit locations of facilities. Furthermore, the system comprises IoT computing devices equipped with various sensors and imaging devices configured to capture sensor data (e.g., video, images, and the like) associated with a vehicle, driver, and/or container at an entry or exit point of the facility. As such, the system may utilize a machine learning model to analyze the sensor data and identify desired information (e.g., the driver’s identifier); equivalent to wherein the data file comprises an image of identification of the driver. Furthermore, the sensor system may perform facial or other biometric identification of the driver to verify the driver; equivalent to wherein the data file comprises an image of identification of the driver. Furthermore, the system is configured to perform segmentation, classification, attribute detection, recognition, document data extraction, and the like on the images; equivalent to a machine vision model. As such, the system may use one or more machine learning models to detect attributes and identify desired information (e.g., driver identifiers, vehicle identifiers, container identifiers) from the sensor data that is subsequently used to authenticate the vehicle/driver/container. 
Accordingly, upon authentication, the system may enable access to the facility by controlling a gate to open for the vehicle; equivalent to with a machine vision model of the automation computer system, determining one or more properties of the identification from the image, and wherein the validation of the check-in event is at least in part based on the determined one or more properties of the identification. Claim 6: Prasad/Kim teaches the limitations of claim 1. Furthermore, Prasad teaches the following: With an interface of the automation computer system, retrieving existing information related to the vehicle or driver from a yard management system. Prasad teaches “the sensor system 102 and/or the cloud-based system 104 may request and receive verification data 120 from the third-party system 118 associated with a vehicle attempting to enter the facility. The verification data 120 may be utilized to verify or authenticate the driver, vehicle, and/or container using the extracted identifiers from the vehicle, container, chassis, and the like” (¶ [0033]); “as a vehicle (e.g., a truck, rail car, ship, or the like) approaches an entry or exit point of the logistics facility (e.g., a storage facility, warehouse, port, holding yard, commercial establishment, and the like), the sensor systems 102 may be configured to detect the vehicle and capture sensor data 106 (e.g., video, images, and the like) associated with the vehicle, one or more driver(s) of the vehicle, and/or one or more container(s), crate(s), or pallet(s) associated with the vehicle” (¶ [0028]); “The system 500 can include one or more communication interfaces(s) 502 that enable communication between the system 500 and one or more other local or remote computing device(s) or remote services” (¶ [0065]). As disclosed by the Applicant in the Specification, “The automation system 202 can be configured to enable direct communication with a third-party yard management solution (YMS)” (¶ [0055]). 
Thus, the yard management system is considered to be a third-party yard management system. Thus, Prasad teaches a cloud-based system (comprising a communication interface) that is configured to request and receive verification data (equivalent to existing information) from a third-party system associated with a vehicle/driver attempting to enter a facility (e.g., a holding yard). Authentication of the vehicle for entering the facility/holding yard is based on the verification data provided by the third-party system in addition to the sensor data; equivalent to with an interface of the automation computer system, retrieving existing information related to the vehicle or driver from a yard management system. Claim 7: Prasad/Kim teaches the limitations of claim 6. Furthermore, Prasad does not explicitly teach, however Kim does teach, the following: Wherein the destination is determined based on the existing information. Kim teaches “[t]he present invention relates to a technology incorporating artificial intelligence technology into a vehicle parking service […] the processor is configured to: receive identification information of an entering vehicle from a first camera installed at an entrance of a parking lot; check the reservation information for a reservation time and a reserved parking spot in response to the identification information; check the location of the vehicle from a plurality of second cameras installed inside the parking lot; generate route information based on the location of the entrance and exit of the parking lot and the reserved parking spot; generate guide information based on the location of the vehicle and the route information; and provide the guide information to a user terminal operating the vehicle” (see Abstract). Thus, Kim teaches a system which utilizes artificial intelligence to generate guide information for a vehicle to navigate to reserved parking spot. 
First, the system processor may receive identification information of an entering vehicle from a first camera installed at an entrance of a parking lot and check the reservation information for a reservation time and a reserved parking spot in response to the identification information. Subsequently, the system may generate route information based on the reserved parking spot, generate guide information based on the location of the vehicle and the route information, and provide the guide information to a user terminal operating the vehicle; equivalent to wherein the destination is determined based on the existing information. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the automated facility check-in system of Prasad the ability to determine a destination of the vehicle based on existing reservation information corresponding to a parking area, as taught by Kim, into the system of Prasad that is configured to request and receive verification data from third-party systems to verify/authenticate a driver/vehicle to enter a gated facility. One of ordinary skill in the art would have recognized that such a modification would further enable the system of Prasad to request and receive verification data from third-party systems to authenticate a driver/vehicle to enter a gated facility, determine a destination of the vehicle based on existing reservation information, and provide route guidance information to the vehicle based on the determined destination. One of ordinary skill in the art would have been motivated to make this modification with the purpose to further “reduce the travel time of the vehicle and thereby reduce the emission of exhaust gas, thereby preventing environmental pollution and preventing illegal parking” (¶ [0037]), as suggested by Kim. 
Furthermore, one of ordinary skill in the art would have been motivated to make this modification with the purpose of further helping in “reducing the congestion at the exit and entry points and allowing the vehicles and drivers to spend more time transporting goods” (¶ [0020]), as suggested by Prasad. Claim 8: Prasad/Kim teaches the limitations of claim 6. Furthermore, Prasad teaches the following: With the machine vision model, generating a three-dimensional model of the vehicle; and determining one or more properties associated with the vehicle based on the three-dimensional model of the vehicle. Prasad teaches “The IoT computing devices and/or the cloud-based services may also be equipped with models and instructions to capture, parse, identify, and extract information from the vehicles, containers, and/or various documents […] the IoT computing devices and/or the cloud-based services may be configured to perform segmentation, classification, attribute detection, recognition, document data extraction, and the like […] the IoT computing devices and/or an associated cloud based service may utilize machine learning and/or deep learning models to perform the various tasks and operations” (¶ [0021]); “machine learning algorithms can include, but are not limited to, […] Dimensionality Reduction Algorithms (e.g., Principal Component Analysis (PCA), […] Sammon Mapping, Multidimensional Scaling (MDS)” (¶ [0024]); “the IoT computing devices may perform a data normalization using techniques such as threshold-based data normalization and machine learning algorithms to identify the driver, vehicle, or container” (¶ [0027]); “the IoT computing devices may use various types of a sensors (e.g., LIDAR, SWIR, Radio Wave, Muon, etc.), with capabilities such as but not limited to varying fields of view, along with the camera or image systems and edge computing capabilities to detect various attributes such as container damage, leakage, size, weight, and the like of an a vehicle, 
chassis, and/or container” (¶ [0025]); “the IoT computing devices may determine a weight and/or dimensions of the vehicle, trailer, chassis, or container, such as in the case of a weighing station on highways as well as at the check-in and/or check-out gates at warehouses, distribution centers, yards, etc. by analyzing the image data to determine, for instance, an actual distance between the chassis and the wheels” (¶ [0026]). Thus, Prasad teaches that the IoT computing device may use various types of sensors and machine learning techniques (e.g., Sammon Mapping) to extract, segment, classify, detect, and recognize information from captured images/videos. As such, the IoT computing devices may determine dimensions of a vehicle (using the machine learning techniques including Sammon mapping) and determine various attributes such as a distance between the chassis and the wheels of the vehicle based on the analysis of the images. Moreover, one of ordinary skill in the art would recognize that Sammon mapping is a machine learning technique that maps high-dimensional data into a lower-dimensional space, such as providing a 2D or 3D representation of the captured/processed data; equivalent to with the machine vision model, generating a three-dimensional model of the vehicle; and determining one or more properties associated with the vehicle based on the three-dimensional model of the vehicle. Claim 9: Prasad/Kim teaches the limitations of claim 1. Furthermore, Prasad does not explicitly teach, however Kim does teach, the following: Receiving a plurality of images for a plurality of vehicles; and Processing the plurality of images with the automation computer system to determine optimal paths for the plurality of vehicles to respective destinations or to gates for ingress or egress. 
Kim teaches “[t]he present invention relates to a technology incorporating artificial intelligence technology into a vehicle parking service […] the processor is configured to: receive identification information of an entering vehicle from a first camera installed at an entrance of a parking lot; check the reservation information for a reservation time and a reserved parking spot in response to the identification information; check the location of the vehicle from a plurality of second cameras installed inside the parking lot; generate route information based on the location of the entrance and exit of the parking lot and the reserved parking spot; generate guide information based on the location of the vehicle and the route information; and provide the guide information to a user terminal operating the vehicle” (see Abstract); “the processor 110 derives a vehicle image of a third camera from which the vehicle is photographed, among the plurality of second cameras, and determines the position of the third camera and the vehicle image. Determines the location of the vehicle based on the path information, generates an augmented reality image corresponding to the location of the vehicle, the location of the third camera, and the angle of the third camera based on the route information, and the image of the vehicle The guidance information may be generated by overlapping augmented reality images” (¶ [0059]); “Through this, it is possible to check in which direction the vehicle should go to the reserved parking spot” (¶ [0060]); “the artificial intelligence-based vehicle parking reservation and guide service providing method according to an embodiment of the present invention may generate guide information based on the location of the vehicle and the route information” (¶ [0097]). Thus, Kim teaches a system which utilizes artificial intelligence to generate guide information for vehicles to navigate to reserved parking spots. 
The artificial intelligence based system may utilize a plurality of cameras to capture images of vehicles, which are used to determine current locations of the vehicles and generate the guidance information that is provided to a user terminal associated with a vehicle. Accordingly, it is possible for the system to check in which direction a vehicle should go to the reserved parking spot; equivalent to receiving a plurality of images for a plurality of vehicles; and processing the plurality of images with the automation computer system to determine optimal paths for the plurality of vehicles to respective destinations. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Prasad with the teachings of Kim by incorporating the features for obtaining a plurality of images of vehicles within a parking area and generating guidance information for navigating to a destination parking spot based on an analysis of the captured images using artificial intelligence, as taught by Kim, into the system of Prasad that is configured to authenticate a driver/vehicle before granting access to a gated facility based on captured images of the driver/vehicle. One of ordinary skill in the art would have recognized that such a modification would further enable the system of Prasad to capture images of a vehicle/driver at an entrance of a gated facility, authenticate the vehicle/driver based on the captured images, and responsively output an optimized route to a particular destination parking location within the gated facility. One of ordinary skill in the art would have been motivated to make this modification with the purpose of further helping in “reducing the congestion at the exit and entry points and allowing the vehicles and drivers to spend more time transporting goods” (¶ [0020]), as suggested by Prasad. Claim 10: Prasad/Kim teaches the limitations of claim 1. 
Furthermore, Prasad teaches the following: In response to not receiving input from a sensor used to detect vehicles, stopping processing of the plurality of images by the automation computer system. Prasad teaches “as a vehicle (e.g., a truck, rail car, ship, or the like) approaches an entry or exit point of the logistics facility (e.g., a storage facility, warehouse, port, holding yard, commercial establishment, and the like), the sensor systems 102 may be configured to detect the vehicle and capture sensor data 106 (e.g., video, images, and the like) associated with the vehicle, one or more driver(s) of the vehicle, and/or one or more container(s), crate(s), or pallet(s) associated with the vehicle” (¶ [0028]); “The captured sensor data 106 may be used to verify the vehicle, driver, container or contents of the container, and the like [… ] the cloud-based system 104 may process the sensor data 106, for instance, using one or more machine learned model(s) to segment, classify, and identify the desired information (e.g., the driver’s identifier, the vehicle identifier, and/or the container identifier)” (¶ [0029]). Thus, Prasad teaches a system configured to detect and capture sensor data (i.e., video, images, and the like) of vehicles that are approaching an entry or exit point of a facility. The system is further configured to process the captured sensor data to verify the vehicle. One of ordinary skill in the art would recognize that if there are no vehicles approaching the entry or exit point of the facility, then no sensor data is captured or processed by the system; equivalent to in response to not receiving input from a sensor used to detect vehicles, stopping processing of the plurality of images by the automation computer system. 
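The gating behavior the Examiner reasons about above, halting image processing when the vehicle-detection sensor provides no input, can be sketched as follows. This is purely illustrative and not Prasad's implementation; the function name, the shape of the sensor signal, and the per-frame analysis callback are all hypothetical.

```python
def process_frames(sensor_active, frames, analyze):
    """Illustrative sketch: process captured frames only while the
    vehicle-detection sensor reports input; stop processing as soon
    as the sensor goes quiet."""
    results = []
    for active, frame in zip(sensor_active, frames):
        if not active:
            # no vehicle detected by the sensor -> stop image processing
            break
        results.append(analyze(frame))
    return results

# Usage sketch: frames 3 and 4 are never analyzed because the
# sensor reported no vehicle at frame 3.
processed = process_frames([True, True, False, True],
                           ["f1", "f2", "f3", "f4"],
                           lambda frame: frame.upper())
```

The point of the sketch is simply that when the sensor produces no detection, the downstream image-analysis work never runs, which is how the Examiner reads Prasad's capture-then-process pipeline onto the claim limitation.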
Claim 12: Prasad teaches the following: One or more imaging devices; A processor; and A memory, in communication with the processor, the memory storing computer executable instructions that cause the processor to: Prasad teaches “Discussed herein are systems and devices for automating and computerizing the check in and check out process at entry and exit locations of facilities […] a system including internet of things (IoT) computing devices may be positioned about entry and exit locations (e.g., check in and/or check out area) of a storage facility, shipping yard, processing plant, warehouse, distribution center, port, rail yard, rail terminal, and the like. The IoT computing device may be equipped with various sensor and/or image capture technologies, and configured to capture, parse, and identify vehicle and container information from the exterior of vehicles, containers, pallets, and the like” (¶ [0018]); “The system 500 may include one or more processors 510 and one or more computer-readable media 512. Each of the processors 510 may itself comprise one or more processors or processing cores. The computer-readable media 512 is illustrated as including memory/storage.” (¶ [0068]); “Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 512 and configured to execute on the processors 510.” (¶ [0069]). Thus, Prasad teaches a system comprising a processor, computer-readable medium (memory) storing instructions executable by the processor, and IoT computing devices equipped with various image capture technologies; equivalent to one or more imaging devices, a processor, and a memory, in communication with the processor, the memory storing computer executable instructions that cause the processor to perform functions. The remaining limitations of claim 12 are substantially similar and analogous to the limitations of claim 1. 
Accordingly, the remaining limitations of claim 12 are rejected for the same reasons and rationale as discussed above with regard to claim 1. Claim 14: Prasad/Kim teaches the limitations of claim 12. Furthermore, the limitations of claim 14 are substantially similar and analogous to the limitations of claim 3. Accordingly, claim 14 is rejected for the same reasons and rationale as discussed above with regard to claim 3. Claim 15: Prasad/Kim teaches the limitations of claim 12. Furthermore, the limitations of claim 15 are substantially similar and analogous to the limitations of claim 4. Accordingly, claim 15 is rejected for the same reasons and rationale as discussed above with regard to claim 4. Claim 16: Prasad/Kim teaches the limitations of claim 12. Furthermore, the limitations of claim 16 are substantially similar and analogous to the limitations of claim 5. Accordingly, claim 16 is rejected for the same reasons and rationale as discussed above with regard to claim 5. Claim 17: Prasad/Kim teaches the limitations of claim 12. Furthermore, the limitations of claim 17 are substantially similar and analogous to the limitations of claim 6. Accordingly, claim 17 is rejected for the same reasons and rationale as discussed above with regard to claim 6. Claim 18: Prasad/Kim teaches the limitations of claim 17. Furthermore, the limitations of claim 18 are substantially similar and analogous to the limitations of claim 7. Accordingly, claim 18 is rejected for the same reasons and rationale as discussed above with regard to claim 7. Claim 19: Prasad/Kim teaches the limitations of claim 12. Furthermore, the limitations of claim 19 are substantially similar and analogous to the limitations of claim 8. Accordingly, claim 19 is rejected for the same reasons and rationale as discussed above with regard to claim 8. 
Claim 20: Prasad teaches the following: A non-transitory computer readable medium comprising computer executable instructions, the computer readable medium, when executed by a processor, causing the processor to: Prasad teaches “The system 500 may include one or more processors 510 and one or more computer-readable media 512. Each of the processors 510 may itself comprise one or more processors or processing cores. The computer-readable media 512 is illustrated as including memory/storage.” (¶ [0068]); “Several modules such as instructions, data stores, and so forth may be stored within the computer-readable media 512 and configured to execute on the processors 510.” (¶ [0069]). Thus, Prasad teaches a system comprising a processor and computer-readable medium (memory) storing instructions executable by the processor; equivalent to a non-transitory computer readable medium comprising computer executable instructions, the computer readable medium, when executed by a processor, causing the processor to perform functions. The remaining limitations of claim 20 are substantially similar and analogous to the limitations of claim 1. Accordingly, the remaining limitations of claim 20 are rejected for the same reasons and rationale as discussed above with regard to claim 1. Claim 21: Prasad/Kim teaches the limitations of claim 12. Furthermore, the limitations of claim 21 are substantially similar and analogous to the limitations of claim 9. Accordingly, claim 21 is rejected for the same reasons and rationale as discussed above with regard to claim 9. Claim 22: Prasad/Kim teaches the limitations of claim 12. Furthermore, the limitations of claim 22 are substantially similar and analogous to the limitations of claim 10. Accordingly, claim 22 is rejected for the same reasons and rationale as discussed above with regard to claim 10. Claim 11 is rejected under 35 U.S.C. § 103 as being unpatentable over Prasad, in view of Kim, in further view of Bautista et al. 
WO2018147519A1, hereafter known as Bautista. Claim 11: Prasad/Kim teaches the limitations of claim 1. Furthermore, Prasad does not explicitly teach, however Bautista does teach, the following: Receiving a second set of images of the vehicle from the one or more imaging devices; With the machine vision model, at least in part determining a presence of the vehicle in the second set of images; Determining a duration the vehicle spent inside a region based on the plurality of images and the second set of images; and Generating an invoice based on the determined duration. Bautista teaches “The sensing unit 21 is configured to capture the appearance image of the vehicle 10 entering the parking area or the vehicle 10 leaving the parking area, the vehicle number and the vehicle manufacturer's indication (e.g., the emblem or logo of the vehicle manufacturer) […] For example, the sensing unit 21 may include a general image sensor, an infrared sensor, a depth sensor for sensing a three-dimensional image, a motion sensor, and the like, but is not limited thereto […] the sensing unit 112 may further include an image processing unit for analyzing the captured image. Hereinafter, for the sake of explanation, the sensing unit 21 may be represented by an imaging device” (¶ [0074]); “a parking area management method comprising: obtaining information about a vehicle from an appearance of a vehicle entering a parking area […] Acquiring first encryption information corresponding to a vehicle using information about the vehicle” (¶[0017]); “acquiring information on the vehicle from the exterior of a vehicle to leave the parking area […] Generating second encryption information using information about the vehicle” (¶ [0022]); “the parking area management device 20 may determine the parking fee for the vehicle based on the interval between the time at which the first encryption information is generated and the time at which the second encryption information is generated . 
When the determined parking charge is settled, the parking area management device 20 can permit the passage of the vehicle. At this time, the payment of the parking fee can be performed in various ways such as cash, credit card, payment using the mobile terminal 11, account transfer, and the like” (¶ [0141]). Thus, Bautista teaches a system comprising a sensing unit (i.e., an imaging device) that is configured to capture appearance images of a vehicle entering an area and exiting an area, and further configured to analyze the captured images to generate parking fees. One of ordinary skill in the art would recognize that a system (comprising an imaging device) that is configured to automatically capture and analyze images to detect particular information in the images and make particular decisions based on the captured images is equivalent to a machine vision model. In particular, the system may generate “first encryption information” based on the captured images of the vehicle entering the area, and generate “second encryption information” based on the captured images of the vehicle exiting/leaving the area. Moreover, the system may determine a parking fee for the vehicle based on the interval between the time at which the first encryption information is generated and the time at which the second encryption information is generated, where the payment of the fee can be performed using a mobile terminal; equivalent to receiving a second set of images of the vehicle from the one or more imaging devices; with the machine vision model, at least in part determining a presence of the vehicle in the second set of images; determining a duration the vehicle spent inside a region based on the plurality of images and the second set of images; and generating an invoice based on the determined duration. 
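The interval-based fee determination the Examiner maps above, charging for the duration between the entry-record timestamp and the exit-record timestamp, can be sketched as a few lines of arithmetic. This is an illustration only, not Bautista's disclosed implementation: the hourly rate and the per-started-hour rounding are assumptions not found in the reference.

```python
from datetime import datetime
from math import ceil

def parking_fee(entry_time, exit_time, rate_per_hour=2.0):
    """Illustrative sketch: fee from the interval between the timestamps
    of the entry record and the exit record, billed per started hour.
    (Rate and rounding rule are hypothetical, not from Bautista.)"""
    duration = exit_time - entry_time
    started_hours = ceil(duration.total_seconds() / 3600)
    return started_hours * rate_per_hour

# Usage sketch: 2.5 hours parked is billed as 3 started hours.
fee = parking_fee(datetime(2026, 4, 18, 9, 0), datetime(2026, 4, 18, 11, 30))
```

Whatever the actual rate schedule, the structure matches the claim mapping: two timestamped observations of the same vehicle bound a duration, and the invoice is a function of that duration.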
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include in the automated facility check-in system of Prasad the ability to generate a parking fee based on the interval between the time at which first encryption information is generated (i.e., data generated based on an analysis of a captured image of a vehicle entering a particular area) and the time at which second encryption information is generated (i.e., data generated based on an analysis of a captured image of a vehicle leaving a particular area), as taught by Bautista, since the claimed invention is merely a combination of old elements. In the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE G DEL TORO-ORTEGA whose telephone number is (571) 272-5319. The examiner can normally be reached Monday-Friday 9:00AM-6:00PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Shannon Campbell, can be reached at (571) 272-5587. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORGE G DEL TORO-ORTEGA/
Examiner, Art Unit 3628

/SHANNON S CAMPBELL/
Supervisory Patent Examiner, Art Unit 3628
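For quick reference, the claim 11 limitation chain the Examiner maps to Bautista (entry and exit image sets, machine-vision presence detection, a duration inside the region, and an invoice based on that duration) can be sketched as below. This is an illustrative sketch only; every name (detect_vehicle, RATE_PER_HOUR, the round-up billing convention) is hypothetical and appears in neither the application nor the cited art.

```python
from datetime import datetime

# Hypothetical sketch of the claim 11 limitation chain. All names and
# the hourly round-up billing convention are illustrative assumptions.
RATE_PER_HOUR = 4.00  # assumed billing rate

def detect_vehicle(images):
    """Stand-in for the machine vision model: reports whether the
    vehicle is present in the given image set (placeholder logic)."""
    return bool(images)

def duration_in_region(entry_time, exit_time):
    """Duration the vehicle spent inside the region, derived from the
    timestamps of the first and second sets of images."""
    return exit_time - entry_time

def generate_invoice(duration):
    """Invoice amount based on the determined duration, rounded up to
    the next full hour (an assumed convention)."""
    hours = -(-duration.total_seconds() // 3600)  # ceiling division
    return hours * RATE_PER_HOUR

# Entry images establish presence at 9:00; exit images at 11:30.
entry = datetime(2026, 3, 10, 9, 0)
exit_ = datetime(2026, 3, 10, 11, 30)
fee = 0.0
if detect_vehicle(["img_entry"]) and detect_vehicle(["img_exit"]):
    fee = generate_invoice(duration_in_region(entry, exit_))
    print(f"Invoice: ${fee:.2f}")  # 2.5 h rounds up to 3 h -> $12.00
```

The sketch mirrors the Examiner's equivalence argument: the timestamps of the two image sets play the role of Bautista's first and second encryption-information generation times, and the fee is a function of their interval.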

Prosecution Timeline

Jun 05, 2024
Application Filed
Feb 12, 2025
Non-Final Rejection — §103, §112
May 26, 2025
Interview Requested
Jun 05, 2025
Examiner Interview Summary
Jul 10, 2025
Response Filed
Aug 23, 2025
Final Rejection — §103, §112
Oct 27, 2025
Response after Non-Final Action
Nov 21, 2025
Request for Continued Examination
Dec 08, 2025
Response after Non-Final Action
Dec 11, 2025
Non-Final Rejection — §103, §112
Mar 10, 2026
Response Filed
Mar 31, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602644
GRAPHIC USER INTERFACE FOR REAL-TIME CARGO HISTORY MANAGEMENT SERVICE BASED ON CARGO TRACKING API LINKAGE
2y 5m to grant Granted Apr 14, 2026
Patent 12572449
Virtual Assistant Domain Selection Analysis
2y 5m to grant Granted Mar 10, 2026
Patent 12565113
ELECTRIC VEHICLE CHARGING ARRANGEMENT
2y 5m to grant Granted Mar 03, 2026
Patent 12493849
MACHINE LEARNING-BASED PREDICTION OF ESTIMATED EQUIPMENT ARRIVAL TIMES IN A RAILROAD NETWORK
2y 5m to grant Granted Dec 09, 2025
Patent 12462313
ENERGY DISPATCH OPTIMIZATION USING A FLEET OF DISTRIBUTED ENERGY RESOURCES
2y 5m to grant Granted Nov 04, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
18%
Grant Probability
48%
With Interview (+29.9%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
