Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Applicant’s submission filed 12/30/25 has been entered. Claims 1-20 are presented for examination.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6-11, 13-18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Grundfest et al. (US 20200254619 A1), in view of ROSAS-MAXEMIN et al. (WO 2023107584).
Regarding claim 1, Grundfest et al. teach a system for visualizing inventory of an intermodal container yard, the system comprising:
an inventory database storing a unit inventory, the unit inventory comprising information about a plurality of shipping containers physically located in the intermodal container yard;
(see e.g. [0048], database of containership 106;
[0049] In an embodiment, the information system 113 may query a database of a containership 106 via a designated interface for the manifest of a previously identified container. Alternatively, the container may also be identified by its position on the containership 106 deck.)
one or more memory units; and
one or more computer processors communicatively coupled to the inventory database and the one or more memory units and configured to:
access the unit inventory stored in the inventory database;
(see e.g. [0049] In an embodiment, the information system 113 may query a database of a containership 106 via a designated interface for the manifest of a previously identified container. Alternatively, the container may also be identified by its position on the containership 106 deck.)
provide one or more instructions to display, on an electronic display of a user computing system, a graphical user interface comprising a three-dimensional representation of the intermodal container yard;
(see e.g. [0101] Safety management of a smart container yard 100 may be a challenging task. Multiple manned, autonomous and teleoperated vehicles, as well aerial drones, containers in transfer and crane arms create a complex three-dimensional interplay that must be managed in a manner as to prevent collisions.
[0102] The container yard information system 113 may generate a space state representing physical objects in the container yard 100 and generate an operational plan based on the space state to include navigation paths that avoid physical objects. For example, responsive to the data acquired from the sensor arrays 107, the information system 113 subsequently builds and maintains a representation of the container yard 100 space as a cuboid populated by polygonal shapes enveloping physical objects in the container yard 100, further referred to as obstacles Ω. The information system 113 subsequently performs a 3D constrained Delaunay triangulation of vertices of obstacles ν.sub.i in the yard, which also includes edges of obstacles ϵ.sub.i as a part of the triangulation based on this representation of the space of the container yard. 3D Delaunay triangulations possess the empty sphere property, i.e. the circumscribing sphere of each cell produced by triangulation process contains no other vertices of the triangulation in its interior.)
update, in real time, the three-dimensional representation of the intermodal container yard based on the information about the plurality of shipping containers in the unit inventory,
(see e.g. [0102] the information system 113 subsequently builds and maintains a representation of the container yard 100 space as a cuboid populated by polygonal shapes enveloping physical objects in the container yard 100, further referred to as obstacles Ω. The information system 113 subsequently performs a 3D constrained Delaunay triangulation of vertices of obstacles ν.sub.i in the yard, which also includes edges of obstacles ϵ.sub.i as a part of the triangulation based on this representation of the space of the container yard. 3D Delaunay triangulations possess the empty sphere property, i.e. the circumscribing sphere of each cell produced by triangulation process contains no other vertices of the triangulation in its interior.
[0104] In an embodiment, the onboard computing system of a utility vehicle such as a forklift 114 or a container handler may sequentially transmit updates on forklift or spreader extension level to the container yard information system 113. This information may then be used to update the dynamic 3D obstacle map and path planning of other vehicles and drones.)
wherein updating the three-dimensional representation of the intermodal container yard comprises:
providing, in the three-dimensional representation of the intermodal container yard, a visual representation of each shipping container physically located in the intermodal container yard,
(see e.g. [0025] A smart container yard information system also maintains a representation of the current state of a container yard with the degree of precision and completeness desirable for safe operation utilizing a distributed array of stereo optical and infrared cameras, ranging devices such as sonars, LIDARs, radars, BLE beacons or other implements as well as computational systems for processing sensor measurements. A container yard space state may be represented as a cuboid populated by convex polygonal shapes enveloping physical objects in the container yard. Vehicles, structures and other objects may be identified by means such as Bluetooth low energy (BLE) beacons, radio frequency identification (RFID) tags, near field communication (NFC) tags, QR codes or other machine-readable decals, computer vision algorithms applied to human-readable IDs such as license plates or vehicle numbers.)
providing, in the three-dimensional representation of the intermodal container yard, a visual representation of a current location of a vehicle travelling within the intermodal container yard; and
(see e.g. [0106] In an embodiment, the container yard information system 113 acquires information necessary to update dynamic obstacle map on positions and shapes of obstacles such as forklifts 114, yard trucks 115, aerial drones 116, wireless transmitters 109 and cranes 102, 103 from sensor arrays 107 such as optical or infrared camera arrays and computer vision techniques, or ranging sensors such as LIDARs, sonars or radars.
[0028] In another embodiment, path homotopies in a dynamical 3D environment of a container yard are identified using methods such as constrained Delaunay triangulation, and coordinated path planning for transfer cranes, ground and aerial vehicles in the container yard is executed. Path homotopies may be refined to propose safe path solutions, which may subsequently be provided to human operators using methods such as augmented reality displays, or to machine intelligence agents via appropriate channels and protocols.)
provide one or more instructions to display in the graphical user interface, in response to a user selection of a particular visual representation of a particular shipping container, metadata about the particular shipping container.
(see e.g. [0084] In a further embodiment, an aerial drone 116 may safely park in a designated location to conserve battery power responsive to a signal from the information system 113 that a loading or unloading operation of a container 301 has been initiated. Alternatively, an aerial drone 116 may provide an auxiliary exocentric video feed to a teleoperator controlling a transfer crane 102, as well as an optional set of controls to manipulate the sensor array 303.
Note: the video feed may contain metadata of the shipping container.
[0043] In an embodiment, the information system 113 may obtain a container identifier from a sensor array 107 or a service vehicle 114, 115, obtain a container manifest from a shipper or a third-party source, retrieve information corresponding to the identifier in the manifest, and use the retrieved information to determine dynamical properties of the container such as the center of mass.)
Although Grundfest et al. suggest that the visual representation of each particular shipping container is placed at a virtual parking location in the three-dimensional representation of the intermodal container yard that corresponds to a physical parking location of the particular shipping container in the intermodal container yard;
(see e.g. claim 12, wherein the augmented reality view includes a representation of information from the container yard information system describing physical objects in a vicinity of the truck.
[0064] A number of methods of preparing information for teleoperator perception may be employed. Some such methods may involve overlays of text, geometric shapes, highlights and other augmented reality elements onto video feeds received from cameras in the container yard 100 to provide an augmented reality view of a perspective of the truck that may generate HUDs (heads-up displays) on a display device at the teleoperator workstation 117. Video feeds comprising a main teleoperator view may occupy the whole background of a monitor, while additional data such as virtual lane boundaries to follow, lines enveloping the expected vehicle position for a brief future time interval, representations of objects in a vicinity of the vehicle, shipment manifests, additional exocentric camera feeds or the speed limit to observe may be rendered on top of the main teleoperator view. In another embodiment, the HUD of the teleoperator workstation 117 is implemented as a projection of auxiliary information on the screen of a virtual reality headset.)
ROSAS-MAXEMIN et al. explicitly teach wherein the visual representation of each particular shipping container is placed at a virtual parking location in the three-dimensional representation of the intermodal container yard that corresponds to a physical parking location of the particular shipping container in the intermodal container yard; and
(see e.g. [0019] As discussed above, in some examples computer 220 may use machine learning and computer vision algorithms on image and video data captured by camera 210 to identify shipping containers and open parking spots in a container yard. The next remaining step may be determining the location of the shipping containers and open parking spots in a container yard.
[0015] The container yard 100 includes one or more shipping containers 160, one or more parking spots 130, remote location 140, yard rig 120, and drone 110. The yard rig 120 and drone 150 each include locator system 110. In some aspects, the size (e.g., dimensions in three-dimensional space) of container yard 100 may vary depending on the number of shipping containers 160 and open parking spots 130.)
ROSAS-MAXEMIN et al. also teach a system for visualizing inventory of an intermodal container yard,
(see abstract: Described are systems and techniques for locating and identifying shipping containers or open parking spots in a container yard using computer vision and machine learning algorithms, a camera, and position, navigation, and timing (PNT) sensors.)
provide one or more instructions to display in the graphical user interface, in response to a user selection of a particular visual representation of a particular shipping container, metadata about the particular shipping container.
(see e.g. [0015] In some aspects, computer vision and machine learning algorithms may be implemented on a yard rig or a drone to read identifying information on a shipping container such as a trailer ID or other unique identifying information. In some examples, computer vision and machine learning algorithms may be used to identify open parking spots or spaces to store shipping containers.
[0016] The locator system 110 (e.g., attached to the one or more yard rigs 120 or one or more drones 150) may allow real-time visibility of the location of every shipping container 160 in the container yard 100).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Grundfest et al., and include the steps cited above, as taught by ROSAS-MAXEMIN et al., in order to locate and identify all the shipping containers and open parking spots (see e.g. [0020]).
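For context, the "empty sphere" property of 3D Delaunay triangulations relied upon in the Grundfest passages cited above ([0102]) can be illustrated with a short sketch; the point data, function names, and tolerance below are hypothetical and for illustration only, not taken from the record:

```python
# Illustrative sketch (not from the record): the Delaunay "empty sphere"
# test. A triangulation cell (tetrahedron) satisfies the property when no
# other vertex of the triangulation lies strictly inside its
# circumscribing sphere.
import numpy as np

def circumsphere(tet):
    """Center and radius of the sphere through a tetrahedron's 4 vertices."""
    tet = np.asarray(tet, dtype=float)
    p0 = tet[0]
    A = 2.0 * (tet[1:] - p0)                    # 3x3 linear system for the center
    b = np.sum(tet[1:] ** 2 - p0 ** 2, axis=1)  # |p_i|^2 - |p_0|^2
    center = np.linalg.solve(A, b)
    return center, np.linalg.norm(center - p0)

def empty_sphere_ok(tet, other_vertices, tol=1e-9):
    """True if no vertex in `other_vertices` lies strictly inside the
    circumsphere of `tet` (the Delaunay empty-sphere property)."""
    center, radius = circumsphere(tet)
    dists = np.linalg.norm(np.asarray(other_vertices, float) - center, axis=1)
    return bool(np.all(dists > radius - tol))

# Unit tetrahedron: circumcenter (0.5, 0.5, 0.5), radius sqrt(0.75).
cell = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
assert empty_sphere_ok(cell, [(2, 2, 2)])            # distant vertex: holds
assert not empty_sphere_ok(cell, [(0.5, 0.5, 0.5)])  # interior vertex: violated
```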
Regarding claim 2, Grundfest et al. teach the system for visualizing inventory of the intermodal container yard of Claim 1, wherein the metadata about the particular shipping container comprises one or more of:
an indication of a particular physical parking location of the particular shipping container;
a current image of the particular physical parking location;
a current carrier identification of the particular shipping container;
a current identification number of the particular shipping container;
a timestamp of the current image.
(see e.g. [0102] In an embodiment the container yard information system 113 may utilize a distributed array 107 of sensors such as stereo cameras, ranging devices such as sonars, LIDARs, or radars, BLE beacons, NFC tags and other implements to maintain a representation of the current state of the container yard 100 with the required degree of precision and completeness. The container yard information system 113 may generate a space state representing physical objects in the container yard 100 and generate an operational plan based on the space state to include navigation paths that avoid physical objects.
[0045] In an embodiment, the information system 113 may apply computer vision methods to read a license plate of the carrier truck 103 carrying a container and subsequently query a database of vehicles allowed entry to the container yard 100 for the license plate number parsed in order to perform container identification.
[0105] In an embodiment, the onboard computing system of a utility vehicle such as a forklift 114 or a container handler may transmit the commencement timestamp of extension or other operations changing the shape of a utility vehicle and metadata describing the process such as extension speed and the expected extension length to the container yard information system 113.)
Regarding claim 3, Grundfest et al. teach the system for visualizing inventory of the intermodal container yard of Claim 2, wherein the metadata about the particular shipping container further comprises one or more of:
a previous image of the particular physical parking location;
a previous carrier identification of the particular shipping container;
a previous identification number of the particular shipping container; and
a timestamp of the previous image.
(see e.g. [0049] In an embodiment, the information system 113 may query a database of a containership 106 via a designated interface for the manifest of a previously identified container. Alternatively, the container may also be identified by its position on the containership 106 deck.)
Regarding claim 4, Grundfest et al. do not teach the limitation as claimed.
However, ROSAS-MAXEMIN et al. teach the system for visualizing inventory of the intermodal container yard of Claim 2, wherein the indication of the particular physical parking location of the particular shipping container comprises:
a lot identification;
a row identification; and
a spot identification.
(see e.g. [0017] In some examples, the designated set of coordinates or location (e.g., latitude and longitude) of every shipping container and parking spot (e.g., one or more shipping containers 160 and one or more parking spots 130 as illustrated in FIG. 1) may be stored (e.g., stored in memory of computer 220) in locator system 200 and accessible by computer 220.
[0018] Examples of open parking spot identification features may include, but are not limited to, alphanumeric identifiers in the parking spot and parking lot striping features. In some examples, the computer 220 may use image and video data captured by camera 210 and process the data using computer vision and machine learning algorithms, which may be trained to detect particular features of the surrounding environment such as shipping container and open parking spot identification features.
[0020] As discussed above, in some examples, the location may be a geographic coordinate using latitude and longitude or another type of coordinate system to indicate the location of the shipping containers in the yard. The same principle may apply to determine the location of open parking spots.)
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Grundfest et al., and include the steps cited above, as taught by ROSAS-MAXEMIN et al., in order to locate and identify all the shipping containers and open parking spots (see e.g. [0020]).
Regarding claim 6, Grundfest et al. teach the system for visualizing inventory of the intermodal container yard of Claim 1, wherein the visual representation of each shipping container physically located in the intermodal container yard is based on a carrier identification of the particular shipping container.
(see e.g. [0039] A smart container yard information system 113 maintains a representation of the current state of a smart container yard 100 with a degree of precision and completeness that enables safe operation of the vehicles and equipment in the container yard 100. The information system 113 may furthermore maintain, or receive in real-time, operational signals representative of an operational plan for carrying out operations in the container yard 100. For example, the operational plan may determine an identifier of a carrier truck 103 at the entry gate 110 (or other vehicle in the container yard 100) and perform a lookup of the identifier in a vehicle database to obtain the operational plan.)
Regarding claim 7, Grundfest et al. teach the system for visualizing inventory of the intermodal container yard of Claim 1, wherein the vehicle comprises:
a container delivery vehicle;
an aerial vehicle;
an automobile;
a truck;
a golf cart;
an all-terrain vehicle (ATV);
an autonomous vehicle;
a motorcycle; or
a remote-control vehicle.
(see e.g. [0053] For instance, an aerial drone 116 accompanying a yard truck 115 may be forced to hold its position;
[0077] In an embodiment, a driver of a carrier truck 103 and any accompanying personnel may be transported between staging areas attached to gates 110, 111 by autonomous ground vehicles.
[0037] Alternatively, the carrier truck 103 may comprise an autonomous or semi-autonomous vehicle that includes an autonomous driving system that automatically controls navigation responsive to sensed environment conditions.
[0040] In an embodiment, the information system 113 may use manual modelling performed according to technical specifications, analysis of measurements acquired by sensor arrays 107 or imported vendor datasets, or other sources to produce physical models of ground vehicles (such as forklifts 114 and yard trucks 115) and aerial drones 116 servicing the container yard 100 as well as of visiting trailer trucks 103.)
Claims 8 and 15 recite similar limitations as claim 1 and are therefore rejected under the same prior art and rationale.
Claims 9 and 16 recite similar limitations as claim 2 and are therefore rejected under the same prior art and rationale.
Claims 10 and 17 recite similar limitations as claim 3 and are therefore rejected under the same prior art and rationale.
Claims 11 and 18 recite similar limitations as claim 4 and are therefore rejected under the same prior art and rationale.
Claims 13 and 20 recite similar limitations as claim 6 and are therefore rejected under the same prior art and rationale.
Claim 14 recites similar limitations as claim 7 and is therefore rejected under the same prior art and rationale.
Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Grundfest et al. (US 20200254619 A1), in view of ROSAS-MAXEMIN et al. (WO 2023107584), and further in view of TIAN et al. (CN 113496563 A).
Regarding claim 5, Grundfest et al. teach the metadata.
(see e.g. [0105] In an embodiment, the onboard computing system of a utility vehicle such as a forklift 114 or a container handler may transmit the commencement timestamp of extension or other operations changing the shape of a utility vehicle and metadata describing the process such as extension speed and the expected extension length to the container yard information system 113.
[0045] In an embodiment, the information system 113 may apply computer vision methods to read a license plate of the carrier truck 103 carrying a container and subsequently query a database of vehicles allowed entry to the container yard 100 for the license plate number parsed in order to perform container identification.)
Grundfest et al., in view of ROSAS-MAXEMIN et al., do not teach the following limitation as claimed.
However, TIAN et al. teach the system for visualizing inventory of the intermodal container yard of Claim 2, wherein the metadata about the particular shipping container further comprises a confidence value associated with the current carrier identification of the particular shipping container.
(see e.g. TIAN et al.: When determining the target container image from the multi-frame container images, the terminal device may determine, through a neural network, the confidence of the container identifier in each frame of container image, and determine the target container image from the multi-frame container images according to those confidences. For example, the container image with the maximum container-identifier confidence is determined to be the target container image. Alternatively, the terminal device may process only part of the container images through the neural network (such as the odd-numbered frames, or the even-numbered frames, of all container images) and determine the target container image from those images according to the confidence of their container identifiers.
Further, based on the confidence of each container identifier, if the maximum confidence is greater than a preset threshold value, the terminal device may determine the container image with the maximum confidence to be the target container image; if the maximum confidence is not greater than the preset threshold value, the terminal device determines that none of the container images is the target container image, i.e., the container identifier cannot be obtained from the container images.)
If it is determined, according to the first acquisition time and the second acquisition time, that the container vehicle enters the smart gate, the container identification of the container vehicle is determined according to the target container image, and the license plate identification of the container vehicle is determined according to the target vehicle image;
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Grundfest et al., in view of ROSAS-MAXEMIN et al., and include the steps cited above, as taught by TIAN et al., in order to improve the identification accuracy of container identification and license plate identification, and also to determine that the container vehicle is a legitimate container vehicle according to the container identification and the license plate identification (see e.g. TIAN et al.).
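For context, the confidence-threshold selection of a target container image described in the TIAN passages cited above can be illustrated with a short sketch; the function name, frame data, and threshold value below are hypothetical and for illustration only, not taken from the record:

```python
# Illustrative sketch (hypothetical names): pick the "target container
# image" from multi-frame captures by maximum identifier confidence,
# gated by a preset threshold as TIAN describes.
from typing import Optional, Sequence, Tuple

def select_target_image(
    frames: Sequence[Tuple[str, float]],  # (container_id, confidence) per frame
    threshold: float = 0.8,               # preset threshold (assumed value)
) -> Optional[Tuple[str, float]]:
    """Return the (id, confidence) of the maximum-confidence frame, or
    None if even the best frame does not exceed the preset threshold."""
    if not frames:
        return None
    best = max(frames, key=lambda f: f[1])
    return best if best[1] > threshold else None

# The 0.93 frame is selected; a set topping out at 0.55 yields no target.
assert select_target_image([("MSKU1234567", 0.61), ("MSKU1234567", 0.93)]) == ("MSKU1234567", 0.93)
assert select_target_image([("TEMU7654321", 0.55)]) is None
```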
Claims 12 and 19 recite similar limitations as claim 5 and are therefore rejected under the same prior art and rationale.
Response to Arguments
Applicant’s argument:
Applicant respectfully submits that independent Claim 1 is allowable at least because the proposed Grundfest-Rosas-Maxemin combination fails to disclose, teach, or suggest, expressly or inherently, elements specifically recited in Applicant's claims. For example, the proposed Grundfest-Rosas-Maxemin combination fails to disclose at least the following features recited in Claim 1:
provide one or more instructions to display, on an electronic display of a user computing system, a graphical user interface comprising a three-dimensional representation of the intermodal container yard;
update, in real time, the three-dimensional representation of the intermodal container yard based on the information about the plurality of shipping containers in the unit inventory.
Examiner’s response:
Grundfest et al. teach a three-dimensional representation of the container yard;
(see e.g. [0102] The container yard information system 113 may generate a space state representing physical objects in the container yard 100 and generate an operational plan based on the space state to include navigation paths that avoid physical objects. For example, responsive to the data acquired from the sensor arrays 107, the information system 113 subsequently builds and maintains a representation of the container yard 100 space as a cuboid populated by polygonal shapes enveloping physical objects in the container yard 100, further referred to as obstacles Ω. The information system 113 subsequently performs a 3D constrained Delaunay triangulation of vertices of obstacles ν.sub.i in the yard, which also includes edges of obstacles ϵ.sub.i as a part of the triangulation based on this representation of the space of the container yard. 3D Delaunay triangulations possess the empty sphere property, i.e. the circumscribing sphere of each cell produced by triangulation process contains no other vertices of the triangulation in its interior.)
Grundfest et al. teach updating the three-dimensional representation of the container yard;
(see e.g. [0104] In an embodiment, the onboard computing system of a utility vehicle such as a forklift 114 or a container handler may sequentially transmit updates on forklift or spreader extension level to the container yard information system 113. This information may then be used to update the dynamic 3D obstacle map and path planning of other vehicles and drones.)
With respect to the real-time update of the three-dimensional representation of the intermodal container yard, the following teachings of Grundfest et al. show that the system acquires real-time data to update the dynamic obstacle map.
(see e.g. [0058] In another embodiment, the remote support server 112 may comprise an artificial intelligence agent that does not necessarily require a teleoperator workstation 117 with a display or physical controls for providing human input. Here, the remote support server 112 may provide control instructions to the vehicle directly based on the processing of a real-time video feed and other sensor data without necessarily utilizing any human input.
[0038] The carrier truck 103 may also optionally comprise various sensors such as optical or infrared cameras, ranging devices such as LIDAR, sonar or radar units, or other sensor types allowing real-time acquisition of data on the smart container yard 100, components and occupants of the truck 103, and streaming the acquired data over one or more networks 108 to a remote support server 112. The data acquired by sensors integrated with a carrier truck 103 may augment and expand the data acquired by the standalone sensor arrays 107 set up in the smart container yard 100.
[0106] In an embodiment, the container yard information system 113 acquires information necessary to update dynamic obstacle map on positions and shapes of obstacles such as forklifts 114, yard trucks 115, aerial drones 116, wireless transmitters 109 and cranes 102, 103 from sensor arrays 107 such as optical or infrared camera arrays and computer vision techniques, or ranging sensors such as LIDARs, sonars or radars.)
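For context, the real-time dynamic obstacle-map maintenance described in the Grundfest passages cited above ([0104]-[0106]) can be illustrated with a short sketch; the class names, fields, and telemetry values below are hypothetical and for illustration only, not taken from the record:

```python
# Illustrative sketch (hypothetical structure): a yard information system
# keeps a dynamic obstacle map keyed by object ID and applies telemetry
# updates as vehicles report position or shape changes.
from dataclasses import dataclass, field

@dataclass
class Obstacle:
    obstacle_id: str
    position: tuple  # (x, y, z) in yard coordinates (assumed convention)
    extent: tuple    # bounding-box dimensions (dx, dy, dz)

@dataclass
class YardInformationSystem:
    obstacle_map: dict = field(default_factory=dict)

    def apply_update(self, obstacle_id, position, extent):
        """Insert or overwrite an obstacle from a telemetry message."""
        self.obstacle_map[obstacle_id] = Obstacle(obstacle_id, position, extent)

# A forklift reports a mast extension, changing its effective shape:
system = YardInformationSystem()
system.apply_update("forklift-114", (10.0, 5.0, 0.0), (2.0, 3.0, 2.5))
system.apply_update("forklift-114", (10.0, 5.0, 0.0), (2.0, 3.0, 6.0))
assert system.obstacle_map["forklift-114"].extent == (2.0, 3.0, 6.0)
```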
Response to Arguments
Applicant's arguments filed 12/30/26 have been fully considered but they are not persuasive.
Applicant’s argument:
Respectfully, the Examiner has not provided an adequate explanation as to why one of ordinary skill in the art at the time of invention would have combined the various teachings of the references in the manner the Examiner proposes. Nor does the Examiner state how these systems can be combined and, if combined, would be successfully combined. Thus, Applicant respectfully submits that the Examiner's attempt to combine Grundfest with Rosas-Maxemin appears to constitute the type of impermissible hindsight reconstruction of Applicant's claims, using Applicant's claims as a blueprint, that is specifically prohibited by the M.P.E.P. and governing Federal Circuit cases.
Examiner’s response:
Applicant's contention that the rejection relies on hindsight reconstruction is unavailing. It must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But where, as here, the rejection takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). Further, the reason for the modification/combination need not appear in one or more of the relied-upon references. Instead, the examiner, when analyzing the evidence, may employ common sense not inconsistent with the ordinary level of knowledge and skill in the art at the time of the invention. See Perfect Web Techs. v. InfoUSA, Inc., 587 F.3d 1324, 1328-29 (Fed. Cir. 2009). Reliance on common sense does not imply the use of impermissible hindsight.
Additionally, it is noted that KSR forecloses the argument that a specific teaching, suggestion, or motivation is required to support a finding of obviousness. Under KSR, a claim would have been obvious if the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention. Furthermore, under KSR, a claim would have been obvious if a particular known technique was recognized as part of the ordinary capabilities of one skilled in the art. Thus the claimed subject matter likely would have been obvious under KSR.
Grundfest et al. teach building and maintaining/updating a representation of a container yard.
(see abstract: A smart container yard includes systems for intelligently controlling operations of vehicles in the container yard using teleoperation and/or autonomous operations.
[0102] In an embodiment the container yard information system 113 may utilize a distributed array 107 of sensors such as stereo cameras, ranging devices such as sonars, LIDARs, or radars, BLE beacons, NFC tags and other implements to maintain a representation of the current state of the container yard 100 with the required degree of precision and completeness. The container yard information system 113 may generate a space state representing physical objects in the container yard 100 and generate an operational plan based on the space state to include navigation paths that avoid physical objects. For example, responsive to the data acquired from the sensor arrays 107, the information system 113 subsequently builds and maintains a representation of the container yard 100 space as a cuboid populated by polygonal shapes enveloping physical objects in the container yard 100, further referred to as obstacles Ω. The information system 113 subsequently performs a 3D constrained Delaunay triangulation of vertices of obstacles ν.sub.i in the yard, which also includes edges of obstacles ϵ.sub.i as a part of the triangulation based on this representation of the space of the container yard.
[0104] This information may then be used to update the dynamic 3D obstacle map and path planning of other vehicles and drones.
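For illustration only (this is not part of either reference's disclosure), the obstacle-triangulation technique quoted above can be sketched as follows. Note that SciPy provides an unconstrained Delaunay triangulation, whereas the excerpt recites a constrained triangulation (which in 3D generally requires a specialized library such as TetGen); the obstacle dimensions and variable names below are hypothetical.

```python
# Hypothetical sketch of the quoted technique: represent one cuboid
# "obstacle" (e.g., a parked container) by its corner vertices, then
# triangulate the vertex set into tetrahedra.
import numpy as np
from scipy.spatial import Delaunay

# Eight corner vertices of a cuboid roughly sized like a 40 ft container
# (dimensions in metres; values are illustrative assumptions).
obstacle_vertices = np.array([
    [x, y, z]
    for x in (0.0, 12.2)
    for y in (0.0, 2.4)
    for z in (0.0, 2.6)
])

# Unconstrained 3D Delaunay triangulation of the vertex set.
tri = Delaunay(obstacle_vertices)

# In 3D each simplex is a tetrahedron (4 vertices); an interior point
# falls inside some simplex, which a path planner could treat as blocked.
print(tri.simplices.shape[1])  # 4
```

A path-planning component, as described in the excerpt, could then test candidate waypoints against the triangulated obstacles (e.g., via `tri.find_simplex`) to keep navigation paths clear of occupied space.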
Rosas-Maxemin et al. teach tracking shipments using computer and navigation sensors in dynamic, mobile environments (see abstract). Furthermore, Rosas-Maxemin et al. teach providing solutions for locating shipping containers in a container yard by utilizing at least one of computer vision, machine learning algorithms, navigation sensors, or a combination thereof (see [0015]).
Mapping of the containers is presented in a three-dimensional space:
locator system 200 may determine the location of the shipping containers and prevent them from being lost within the container yard. In some cases, locator system 200 may include a mapping of the container yard (e.g., stored in computer system 220). In other words, locator system 200 may include stored data regarding the container yard including, but not limited to, pavement markings, street markings, lane markings, and dimensions. --The three coordinate axes may represent three-dimensional space. Examples of systems with three coordinate axes may include, but are not limited to, a Cartesian coordinate system, a cylindrical coordinate system, and a spherical coordinate system. Inertial Measurement Unit (IMU) 260 which can include one or more accelerometers 270 and one or more gyroscopes (gyros) 280. In some examples, PNT sensors 250 may be used by computer 220 to determine the position in three-dimensional (3D) space of the object (e.g., yard rig 120 or drone 150 as illustrated in FIG. 1), they are attached to. (see [0017]).
Both Grundfest et al. and Rosas-Maxemin et al. provide information about container yards. For at least claim 1, Grundfest et al. teach the claimed limitations except for explicitly visualizing inventory and providing metadata about a particular shipping container.
Rosas-Maxemin et al. teach visualizing inventory and providing metadata about a particular shipping container.
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Grundfest et al. to include the steps cited above, as taught by Rosas-Maxemin et al., in order to locate and identify all the shipping containers and open parking spots (see e.g. [0020]).
Because it would have been obvious to provide specific information about the shipping containers, as taught by Rosas-Maxemin et al., in the system of Grundfest et al., the rejection is maintained.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to LUNA CHAMPAGNE whose telephone number is (571)272-7177. The examiner can normally be reached M-F 8:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Florian Zeender can be reached at 571 272-6790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/LUNA CHAMPAGNE/Primary Examiner, Art Unit 3627
March 26, 2026