Prosecution Insights
Last updated: April 19, 2026
Application No. 18/745,884

AUGMENTED REALITY VISUALIZATION EMBARKATION METHOD AND SYSTEM

Non-Final OA: §102, §103, §112
Filed: Jun 17, 2024
Examiner: SONNERS, SCOTT E
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: The United States of America as represented by the Secretary of the Navy
OA Round: 1 (Non-Final)

Grant Probability: 69% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 69% (258 granted / 375 resolved; +6.8% vs TC avg) — above average
Interview Lift: +12.0% (moderate) among resolved cases with interview
Avg Prosecution: 3y 2m (typical timeline); 25 applications currently pending
Total Applications: 400 across all art units (career history)
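The headline numbers above can be re-derived from the raw counts. A minimal sanity check, assuming the "With Interview" figure is simply the career allow rate plus the reported interview lift (the report does not state how the 81% is computed):

```python
# Sanity-check the report's headline figures from its raw counts.
granted = 258
resolved = 375

career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")  # 68.8% -> shown as 69%

# Assumption: "With Interview: 81%" = career rate + the +12.0% lift, additively.
interview_lift = 0.120
with_interview = career_allow_rate + interview_lift
print(f"With interview:    {with_interview:.1%}")  # 80.8% -> shown as 81%
```

Both values round to the tiles shown above, which suggests the with-interview probability is the career rate plus the reported lift rather than an independently modeled figure.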

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 29.4% (-10.6% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Tech Center averages are estimates based on career data from 375 resolved cases.
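The per-statute deltas imply the baseline each rate is measured against. A small sketch, assuming each "vs TC avg" delta is a simple difference (rate minus Tech Center average):

```python
# Back out the implied Tech Center average from each statute-specific
# rate and its reported delta (assumption: delta = rate - TC average).
stats = {
    "§101": (7.9, -32.1),
    "§103": (39.2, -0.8),
    "§102": (29.4, -10.6),
    "§112": (14.1, -25.9),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"{statute}: {rate}% (implied TC avg ~{tc_avg:.1f}%)")
```

Under that assumption every statute backs out to roughly the same 40% baseline, consistent with the figures being compared against a single estimated Tech Center average.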

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “bin packing module to provide”, “digital twin module renderer to generate 3D renderings”, “tracking module to track”, and “measurement module to measure and capture dimensions” in claim 17 and “interface module to obtain” in claim 18.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The various “modules” serve as generic placeholders equivalent to a means for performing the function following each module. In the context of the claims and the inventions, such functions are understood to be computer-implemented functions where specialized hardware is a corresponding structure for the functions recited and disclosed. Here the Specification at paragraphs 0033-0036 discloses the structure for the specialized instructions as computing hardware components that perform the respective functions.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-8, 11 and 17-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

The term “if necessary” in claim 1 is a relative term which renders the claim indefinite. The term “necessary” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Here use of the term “necessary” dictates under what conditions the scope of the claim requires viewing of the 3D model and also modifying it and other files, versus only requiring viewing of the 3D model on the condition that it is not “necessary” to modify the model.
Whether something is necessary, and for what purpose it is necessary, is relative not only to some specific goal state of necessity being met, but also to the actor assigning or determining whether something is “necessary” for any of various purposes. Here the actor determining whether something is necessary is left undefined, as is any standard for determining when modification is or is not necessary. Furthermore, it is unclear whether the claim requires the user to view and modify the 3D model until some necessary condition is met, or whether the user may simply view the 3D model if, in the user's opinion, it is not necessary to modify it further. Note that the Specification does not provide any standard for determining when it is or is not necessary to modify a viewed 3D model, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Thus the scope of the claim is not clear. Note that claims 1-8 are rejected for carrying through this deficiency of the parent claim.

Claim 3 recites the limitation "another external process" in line 3. There is insufficient antecedent basis for this limitation in the claim. The claim does not previously recite any other specific external process, nor name any specific function as even an initial “process”, such that it is not clear what “another” is in reference to, nor what “external” is in reference to.
Furthermore, the claim is confusingly worded: “which are provided by another external process” uses “are” instead of “is”, implying that it refers to the state of plural elements, but the system retrieves “a digital twin”, which makes it seem as if this cannot be the plural element referred to by “which are provided by another external process”. However, it is considered that this could refer to the “one or more of the objects”, such that retrieving a digital twin comprises retrieving multiple digital twins, and these will be interpreted as in some manner “provided by another external process”. In the interest of compact prosecution the Examiner will interpret the claim as if it recites “wherein a digital twin of one or more of the objects is provided by an external process”, which would render the claim definite as to both issues above. Claim 11 recites the same limitation and is rejected and interpreted in the same manner as above.

Claim 17 recites the limitation "the equipment" in line 6. There is insufficient antecedent basis for this limitation in the claim. While the claim previously recites a plurality of objects which could be considered equipment, the fact that such objects are not previously defined as equipment leads to indefiniteness, as it is not clear whether such objects are the equipment, or whether equipment may refer to a larger set beyond the plurality of objects or to a different set of objects which are not defined in the claim. In the interest of compact prosecution, the Examiner will interpret the claim as if it reads “equipment” instead, such that the broadest meaning is preserved and would be definite in that it would include a scope in which equipment could comprise the plurality of objects as well as other objects falling within the broadest reasonable interpretation of equipment. Note that claims 18-20 necessarily contain the same deficiency as the parent claim, do not cure the deficiency, and are rejected for the same reasons.
Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-5, 7-13 and 15-16 is/are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Gualtieri et al. (“Gualtieri”).

Regarding claim 1, Gualtieri teaches a method of generating embarkation files used by a 3D augmented reality visualization embarkation system for a user to pack a container including a plurality of objects (note that such generation of embarkation files is addressed by the rejection of the limitations in the body of the claim below, where such generating of embarkation files is a function addressed below), the method comprising: using a Light Detection and Ranging (LiDAR) scanner, scanning objects of an equipment set to be packed into the container and generating a digital twin of each scanned object (note that a LiDAR scanner is any scanner that utilizes light detection and ranging, such that if light in any form is detected and utilized in connection with some ranging operation in relation to an object then such device may be considered a LiDAR scanner; see Gualtieri, paragraphs 0013-0016 teaching “the packing data sources that provide the information related to the parameters associated with the set of items to be loaded into the three-dimensional container can include any
suitable device or combination of devices that can capture or provide such information” where “Additionally, or alternatively, an augmented reality device, a mixed reality device, a hand-held laser scanner, a structured light scanner, and/or another suitable device having appropriate three-dimensional scanning capabilities can be used to collect data relating to the size and shape of the items to be loaded into the three-dimensional container” such that here use of such a hand-held laser scanner or structured light scanner may be considered LiDAR scanners utilizing detection of light in some manner to acquire ranging information of a plurality of objects which are to be packed into the 3D container where such set of items may be considered an equipment set and using such scanner on the objects of the equipment set is involved in generating a digital twin of each scanned object where for example “the parameters associated with the set of items to be loaded into the three-dimensional container can include a quantity of items to be loaded into the three-dimensional container, a size (e.g., a length, width, and height) and/or shape of the items to be loaded into the three-dimensional container, a weight of the items to be loaded into the three-dimensional container, and/or the like. 
Furthermore, in some implementations, one or more of the items can be associated with a robustness parameter, which can indicate whether a given item is durable or fragile, sturdy or flimsy, able or unable to support weight, and/or the like” and as in paragraph 0017 “the load planning platform can generate a packing plan based on the information relating to the constraints associated with the three-dimensional container, the parameters related to the items to be loaded into the three-dimensional container, and a sequence in which the set of items are to be unloaded from the three-dimensional container (e.g., based on a delivery route or delivery sequence, which can be input by a user, obtained from a route planning device, and/or the like)” such that here the digital representations of the parameters that match the parameters of the object shape, dimensions, etc., and that are packed into the 3D containers are digital twins of each scanned object; see paragraph 0030 and figure 1D showing 3D renderings of such items, which are digital twins of the objects arranged in a packing solution in the container); storing in a digital twin database the generated digital twin data of each object of the equipment set (note that a “digital twin database” is any database or equivalent data storage space that can be seen as storing digital twin data in any manner; thus see Gualtieri, paragraphs 0012-0016 teaching “load planning platform, one or more packing data sources, one or more client devices, a three-dimensional container, and a set of items to be loaded into the three-dimensional container” and “load planning platform can determine various parameters and/or constraints associated with the three-dimensional container and the set of items to be loaded into the three-dimensional container based on information received from the one or more packing data sources” where as in paragraphs 0035-0048 and figure 2 the “load planning platform” which receives the generated digital twin
above, is one that includes “storing” of such data where “Load planning platform 230 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with load planning for a three-dimensional container. For example, load planning platform 230 can receive information related to loading constraints for the three-dimensional container and parameters for a set of items to be loaded into the three-dimensional container and generate a preliminary packing solution based on the received information in combination with information related to a sequence in which the set of items are to be unloaded from the three-dimensional container. Load planning platform 230 can further generate a set of packing solutions by applying, to the preliminary packing solution, one or more available moves that change a simulated placement in the three-dimensional container for one or more items in the set of items. Accordingly, load planning platform 230 can select a final packing solution from the set of packing solutions based on one or more optimization criteria and provide client device 210, vehicle device 220, and/or another suitable device with access to a three-dimensional rendering of the final packing solution and instructions for implementing the final packing solution based on the sequence in which the set of items are to be unloaded from the three-dimensional container”); generating an embarkation data file including embarkation data required for shipping the equipment set, the embarkation data file including embarkation data associated with the equipment set and digital twin data representations of each of the objects included in the equipment set, the embarkation data file having a format that is useable by a program for manual viewing and manual editing (note that an “embarkation data file” is interpreted to be any data file including the data recited (where a file is simply some collection of data treated as some type of 
functional unit by any system in which it is managed) which can be seen to relate to “embarkation” in any manner, where the broadest reasonable interpretation of “embarkation” is starting some course of action, which could include, for example, but does not require, some aspect of something boarding a vehicle, where in such case the boarding is simply a specific case of the course of action implied by boarding such a vehicle; see Gualtieri, paragraph 0017 and figure 1A, teaching “the load planning platform can generate a packing plan based on the information relating to the constraints associated with the three-dimensional container, the parameters related to the items to be loaded into the three-dimensional container, and a sequence in which the set of items are to be unloaded from the three-dimensional container (e.g., based on a delivery route or delivery sequence, which can be input by a user, obtained from a route planning device, and/or the like)” such that here this packing plan and its associated data is an embarkation data file which includes the embarkation data required for shipping the equipment set, such as the constraints and parameters, and includes the digital twin representations of each object such that they can be arranged in 3D space in the container space in some at least preliminary packing solution, and this file is useable by a program for manual viewing and manual editing by the user, such as in paragraphs 0024-002 teaching “the preliminary packing solution generated using the heuristic technique can be improved upon during a subsequent optimization phase (to be described in further detail below), which allows the heuristic technique to arrive at the preliminary packing solution in a fast and efficient manner at the possible expense of finding an exact or perfect solution” where “the load planning platform can attempt to optimize (e.g., improve upon) the preliminary packing solution by identifying a set of available moves among the packed items
that have a simulated placement in the three-dimensional container and/or unpacked items that were left out by the heuristic technique” and such embarkation data file can be used as in paragraph 0031 which uses the data in the format received for manual viewing and manual editing where “the load planning platform can provide one or more client devices with access to the three-dimensional rendering of the best packing solution along with instructions for implementing the best packing solution. For example, in some implementations, the one or more client devices can display the three-dimensional rendering on a suitable display device, and the three-dimensional rendering can be rotatable, draggable, zoomable, and/or the like, thus allowing a user to easily search for a particular item or set of items and to get an overall impression of the final packing solution. Furthermore, as shown in FIG. 1D, the three-dimensional rendering can be provided with one or more user interface elements (e.g., a slider in the illustrated example) that allows the user to view the sequence in which the items are to be loaded into and unloaded from the three-dimensional container. 
In some implementations, the user interface elements can also allow the user to modify the sequence in which the items are to be loaded into and unloaded from the three-dimensional container, which can result in the load planning platform automatically recomputing the packing solution according to the modified sequence”); processing the embarkation data file using a 3D bin packing algorithm configured to provide a packing solution to optimally pack all the objects of the equipment set in one or more customizable container configurations, the packing solution including the location of each object in the one or more customizable container configurations (see Gualtieri, paragraphs 0012-0017 teaching “the load planning platform can determine various parameters and/or constraints associated with the three-dimensional container and the set of items to be loaded into the three-dimensional container based on information received from the one or more packing data sources” and “can use the various parameters and/or constraints to generate a preliminary packing solution according to a heuristic technique by attempting to place unpacked items into the three-dimensional container until all items are placed in the three-dimensional container or no further items can be placed in the three-dimensional container without violating one or more loading rules” where such a packing solution is considered optimal with respect to any initial or later heuristic, where for example “place unpacked items into the three-dimensional container until all items are placed in the three-dimensional container or no further items can be placed…without violating one or more loading rules” is an initial packing solution that provides for optimally packing all of the objects according to the initial heuristics and constraints) which 1) maximizes a number of objects packed within the one or more customizable container configurations (see Gualtieri, paragraph 0010 teaching “load planning platform
configured to use one or more combinatorial optimization algorithms to automate decisions on how to pack a set of items into a three-dimensional container or a set of three-dimensional containers based on real-world constraints (e.g., an order in which the items are to be loaded into and unloaded from the three-dimensional container(s), constraints associated with the three-dimensional container(s) such as available space and weight limits, sizes and weights of the items to be loaded into the three-dimensional container(s), whether certain items are fragile or robust, how to load the items in a way that can minimize a risk of the items shifting or falling over during transport, and/or the like)” and “the load planning platform can attempt to optimize (e.g., improve upon) the preliminary packing solution by applying one or more moves that allow for unpacked items to be placed into the three-dimensional container, packed items to be rotated, swapped, relocated, and/or the like” such that here a number of objects packed is maximized by filling space as much as possible until some end condition, and as in paragraph 0017 “load planning platform can select a final (e.g., most efficient or most optimal) packing solution from the set of packing solutions based on one or more optimization criteria, and the packing plan can correspond to the final packing solution that is selected based on the one or more optimization criteria” where as further explained in paragraphs 0024-0025 teaching to use a maximum amount of space available for filling the objects where “constraints with respect to effectively using space within the three-dimensional container (e.g., using as much space as possible while also minimizing risks of damage or injury to the items, a vehicle used to transport the items, people present in the vehicle, and/or the like)” and “the load planning platform can attempt to optimize (e.g., improve upon) the preliminary packing solution by identifying a set of available 
moves among the packed items that have a simulated placement in the three-dimensional container and/or unpacked items that were left out by the heuristic technique. For example, in some implementations, the set of available moves can include one or more of rotating an item horizontally, rotating an item vertically provided that the item is deemed robust or not fragile, swapping a placement for a given pair of items, moving one or more items to a free, unoccupied, or otherwise available space (e.g., a space where no other item is present, an available space on top of one or more other items, and/or the like), inserting one or more items that were left out by the heuristic technique (if any) into the three-dimensional container if there is sufficient space to do so, and/or the like. Accordingly, to optimize (e.g., improve upon) the preliminary packing solution, the load planning platform can identify one or more available moves that can be applied to the preliminary packing solution” such that here, for example, a number of objects may be maximized by determining additional space for objects “if any” “available space” is determined where more objects could go, thus maximizing the number of objects in the container configurations) and/or 2) maximizes a number of priority objects packed within the one or more customizable container configurations, the number of priority objects less than a total number of objects included in the equipment set (see Gualtieri, paragraph 0031 teaching “the user interface elements can also allow the user to modify the sequence in which the items are to be loaded into and unloaded from the three-dimensional container, which can result in the load planning platform automatically recomputing the packing solution according to the modified sequence” such that a user can maximize priority of some object as prioritized to be unloaded last or be located in some egress location, which rearranges the packing solutions, and for example if selecting only
one object for such placement then this is less than the total number in the equipment set); loading the embarkation data files and associated digital twin data representations onto a network server or stand-alone computer (see Gualtieri, paragraphs 0012-0016 teaching “load planning platform, one or more packing data sources, one or more client devices, a three-dimensional container, and a set of items to be loaded into the three-dimensional container” and “load planning platform can determine various parameters and/or constraints associated with the three-dimensional container and the set of items to be loaded into the three-dimensional container based on information received from the one or more packing data sources” where as in paragraphs 0035-0048 and figure 2 the “load planning platform” is one that includes “storing” of such data where “Load planning platform 230 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with load planning for a three-dimensional container. For example, load planning platform 230 can receive information related to loading constraints for the three-dimensional container and parameters for a set of items to be loaded into the three-dimensional container and generate a preliminary packing solution based on the received information in combination with information related to a sequence in which the set of items are to be unloaded from the three-dimensional container. Load planning platform 230 can further generate a set of packing solutions by applying, to the preliminary packing solution, one or more available moves that change a simulated placement in the three-dimensional container for one or more items in the set of items. 
Accordingly, load planning platform 230 can select a final packing solution from the set of packing solutions based on one or more optimization criteria and provide client device 210, vehicle device 220, and/or another suitable device with access to a three-dimensional rendering of the final packing solution and instructions for implementing the final packing solution based on the sequence in which the set of items are to be unloaded from the three-dimensional container” and as in paragraphs 0035-0048 given that the load planning platform can be located at a network server running the load planning platform where “environment 200 can include a client device 210, a vehicle device 220, a load planning platform 230 in a cloud computing environment 240 that includes one or more computing resources 245, a network 250, and/or the like. Devices of environment 200 can interconnect via wired connections” such that this cloud computing environment is a network server where such files and data can be loaded); using an augmented reality (AR) tablet and/or AR headset, a user accessing the embarkation data files and associated digital twin data representations to perform a digital pack out of the associated equipment set, wherein the user views and modifies, if necessary, a 3 dimensional (3D) map of the location of each object of the equipment set within the container (see Gualtieri, paragraphs 0030-0034 teaching “the load planning platform can generate a three-dimensional rendering of the best packing solution and further generate a plan for loading and unloading the three-dimensional container to implement the best packing solution” and “the three-dimensional rendering can show a placement of each item within the three-dimensional container, with the various items marked, labeled, colored, or otherwise differentiated based on the sequence in which the items are to be loaded into and unloaded from the three-dimensional container. 
For example, a set of items to be unloaded first can be rendered in a first color, with a first patterned overlay, and/or the like, a second set of items to be unloaded next can be rendered in a second color, with a second patterned overlay, and/or the like, and a set of items to be unloaded last can be rendered in a different color, with a different patterned overlay, and/or the like” and “the plan for loading and unloading the three-dimensional container can comprise…augmented reality content for guiding personnel through loading the set of items into the three-dimensional container, and/or the like” and “a client device having augmented reality and/or mixed reality capabilities (e.g., a smart phone, an optical see-through head mounted display, and/or the like) can be directed towards the set of items to be loaded into the three-dimensional container, and digital content can be rendered to draw attention to the first set of items to be loaded into the three-dimensional container (e.g., via a colored or patterned overlay, a billboard, and/or the like).
The digital content can further indicate where to place the first set of items within the three-dimensional container, and upon confirming that the first set of items have been correctly placed, this sequence can repeat until all items have been loaded into the three-dimensional container” such that here the user may use an AR headset to perform a digital pack out by digitally packing out or packing or filling the storage container according to the embarkation data files and the user is able to both view and modify a 3D map of the location of each object of the equipment set within the container such as during the pack out and such 3D map may be modified into a final packing solution that would modify any 3D map presented to the user as where “the user interface elements can also allow the user to modify the sequence in which the items are to be loaded into and unloaded from the three-dimensional container, which can result in the load planning platform automatically recomputing the packing solution according to the modified sequence”; note that the user may always view the initial solution and the selected optimal solution and if such solution is acceptable then in such a case the viewing of the 3D map need not optionally include modifying such a map and the user may use the viewing controls to view the 3D map of the location of each object); and the user exporting to the embarkation system a final 3D map of the location of each object of the equipment set within the container and exporting any modified embarkation data files and any modified associated digital twin data representations to the embarkation file database and digital twin database, respectively (see Gualtieri, paragraphs 0012-0016 teaching “load planning platform can determine various parameters and/or constraints associated with the three-dimensional container and the set of items to be loaded into the three-dimensional container based on information received from the one or more packing data 
sources” where as in paragraphs 0035-0048 and figure 2 the “Load planning platform 230 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with load planning for a three-dimensional container. For example, load planning platform 230 can receive information related to loading constraints for the three-dimensional container and parameters for a set of items to be loaded into the three-dimensional container and generate a preliminary packing solution based on the received information in combination with information related to a sequence in which the set of items are to be unloaded from the three-dimensional container. Load planning platform 230 can further generate a set of packing solutions by applying, to the preliminary packing solution, one or more available moves that change a simulated placement in the three-dimensional container for one or more items in the set of items. Accordingly, load planning platform 230 can select a final packing solution from the set of packing solutions based on one or more optimization criteria and provide client device 210, vehicle device 220, and/or another suitable device with access to a three-dimensional rendering of the final packing solution and instructions for implementing the final packing solution based on the sequence in which the set of items are to be unloaded from the three-dimensional container” such that here the embarkation system has all data input by the user exported to the load planning platform portion of the embarkation system including interactions and modifications such that any modification of the files and associated digital twin data is done at the load planning platform based on the user inputs and as in paragraphs 0030-0034 “load planning platform, one or more packing data sources, one or more client devices, a three-dimensional container, and a set of items to be loaded into the three-dimensional container” and “load planning platform can 
determine various parameters and/or constraints associated with the three-dimensional container and the set of items to be loaded into the three-dimensional container based on information received from the one or more packing data sources” where as in paragraphs 0035-0048 and figure 2 the “load planning platform” is one that includes “storing” of such data where “Load planning platform 230 includes one or more devices capable of receiving, generating, storing, processing, and/or providing information associated with load planning for a three-dimensional container. For example, load planning platform 230 can receive information related to loading constraints for the three-dimensional container and parameters for a set of items to be loaded into the three-dimensional container and generate a preliminary packing solution based on the received information in combination with information related to a sequence in which the set of items are to be unloaded from the three-dimensional container. Load planning platform 230 can further generate a set of packing solutions by applying, to the preliminary packing solution, one or more available moves that change a simulated placement in the three-dimensional container for one or more items in the set of items. 
Accordingly, load planning platform 230 can select a final packing solution from the set of packing solutions based on one or more optimization criteria and provide client device 210, vehicle device 220, and/or another suitable device with access to a three-dimensional rendering of the final packing solution and instructions for implementing the final packing solution based on the sequence in which the set of items are to be unloaded from the three-dimensional container” such that here the final packing solution and 3D map of such solution as determined above can be provided from the load planning platform to a client device such that the data received by such client device is the data exported as the final solution to the load planning platform part of the embarkation system). Regarding claim 2, Gualtieri teaches all that is required as applied to claim 1 above and further teaches wherein a packer uses a rendering of the final 3D map of the location of each object of the equipment set within the container to pack the equipment set with the container (see Gualtieri, paragraphs 0030-0035 teaching “load planning platform can generate a three-dimensional rendering of the best packing solution and further generate a plan for loading and unloading the three-dimensional container to implement the best packing solution. In particular, as shown in FIG. 1D, the three-dimensional rendering can show a placement of each item within the three-dimensional container, with the various items marked, labeled, colored, or otherwise differentiated based on the sequence in which the items are to be loaded into and unloaded from the three-dimensional container” and “the plan for loading and unloading the three-dimensional container can comprise … augmented reality content for guiding personnel through loading the set of items into the three-dimensional container, and/or the like” where such personnel would be a packer that uses such a rendering of the final 3D map). 
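The workflow the examiner maps above (a preliminary packing solution, "available moves" that change a simulated placement, and selection of a final solution by optimization criteria) reads as a local-search loop. The sketch below is a minimal illustration of that pattern only; every name and the toy scoring criterion are assumptions for illustration, not anything disclosed in Gualtieri or in the application.

```python
import random
from dataclasses import dataclass


@dataclass
class Placement:
    """Simulated placement of one item inside the container (toy model)."""
    item_id: str
    x: float
    y: float
    z: float


def score(solution):
    """Hypothetical optimization criterion: prefer placements packed
    toward the container origin (a stand-in for real criteria such as
    volume utilization or unload-sequence accessibility)."""
    return -sum(p.x + p.y + p.z for p in solution)


def apply_move(solution):
    """One 'available move': nudge a randomly chosen item's simulated
    placement, returning a new candidate solution."""
    candidate = [Placement(p.item_id, p.x, p.y, p.z) for p in solution]
    p = random.choice(candidate)
    p.x = max(0.0, p.x + random.uniform(-1.0, 1.0))
    return candidate


def select_final_solution(preliminary, n_moves=200):
    """Generate a set of packing solutions by applying moves to the
    preliminary solution, keeping the best under the criterion."""
    best = preliminary
    for _ in range(n_moves):
        candidate = apply_move(best)
        if score(candidate) > score(best):
            best = candidate
    return best
```

A real planner would replace the toy `score` with the criteria Gualtieri actually recites (unload sequence, weight, and so on); the propose-a-move, re-score, keep-if-better structure is what the quoted passage describes.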
Regarding claim 3, as rendered definite as explained above, Gualtieri teaches all that is required as applied to claim 1 above and further teaches wherein the embarkation system retrieves a digital twin of one or more of the objects from the digital twin database, wherein a digital twin of one or more of the objects is provided by an external process (note that the claim does not define or limit what is meant by an “external process” nor is it apparent from any context what would necessarily make a process “external” in relation to another process or function as for example simply performing a function or process in relation to a different function or process could be seen as a process external, or not within some other function and thus for example an external process is any process which can be seen to be different or not within another process; see Gualtieri, paragraph 0016 teaching “the packing data sources that provide the information related to the parameters associated with the set of items to be loaded into the three-dimensional container can include any suitable device or combination of devices that can capture or provide such information. For example, the packing data sources can include a desktop computer, a laptop computer, a smartphone, and/or a similar device that can obtain information related to dimensions, shapes, weights, and/or the like for the set of items via user input, from a storage device, and/or the like and convey the parameters related to the set of items to the load planning platform. Additionally, or alternatively, an augmented reality device, a mixed reality device, a hand-held laser scanner, a structured light scanner, and/or another suitable device having appropriate three-dimensional scanning capabilities can be used to collect data relating to the size and shape of the items to be loaded into the three-dimensional container. 
Additionally, or alternatively, one or more of the items can be provided with a printed label, a barcode, a marking, a radio frequency identification (RFID) tag, and/or the like, which can be used to obtain the information related to the size, shape, and weight of the corresponding item(s)” such that here the load planning platform retrieves a digital twin from a storage device based on parameters which may be obtained from scanning items in combination with other external processes leading to the parameters of the digital twin being obtained for the 3D modeling packing problem ). Regarding claim 4, Gualtieri teaches all that is required as applied to claim 1 above and further teaches wherein the plurality of objects are one or more of pieces of equipment, and bins including pieces of equipment and/or provisions (see Gualtieri, paragraphs 0015-0016 teaching “the load planning platform can receive, from the one or more packing data sources, information relating to various parameters associated with the set of items to be loaded into the three-dimensional container. For example, the parameters associated with the set of items to be loaded into the three-dimensional container can include a quantity of items to be loaded into the three-dimensional container, a size (e.g., a length, width, and height) and/or shape of the items to be loaded into the three-dimensional container, a weight of the items to be loaded into the three-dimensional container, and/or the like. Furthermore, in some implementations, one or more of the items can be associated with a robustness parameter, which can indicate whether a given item is durable or fragile, sturdy or flimsy, able or unable to support weight, and/or the like. 
In some implementations, the set of items to be loaded into the three-dimensional container can be organized into one or more groups (e.g., items that have the same or similar sizes or shapes, items that are to be unloaded at a single stop, and/or the like)” where the items have “dimensions, shapes, weights, and or the like for the set of items” and see paragraph 0030 teaching “as shown in FIG. 1D, the three-dimensional rendering can show a placement of each item within the three-dimensional container, with the various items marked, labeled, colored, or otherwise differentiated based on the sequence in which the items are to be loaded into and unloaded from the three-dimensional container. For example, a set of items to be unloaded first can be rendered in a first color, with a first patterned overlay, and/or the like, a second a set of items to be unloaded next can be rendered in a second color, with a second patterned overlay, and/or the like, and a set of items to be unloaded last can be rendered in a different color, with a different patterned overlay, and/or the like” such that here the items are pieces of equipment and once received they will equip the receiver with such item and such items are in bins including pieces of equipment and/or provisions as seen in figure 1D where items are in boxes which are bins for example ). 
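The item parameters recited in the passages mapped for claim 4 (dimensions, weight, a robustness parameter, and grouping by unload stop) map naturally onto a simple record type. The sketch below is illustrative only; the field and function names are assumptions, not terminology from the reference or the claims.

```python
from dataclasses import dataclass
from collections import defaultdict


@dataclass(frozen=True)
class Item:
    """Toy model of one item's packing parameters as Gualtieri describes
    them (all field names are assumptions for illustration)."""
    item_id: str
    length: float       # dimensions
    width: float
    height: float
    weight: float
    fragile: bool       # stand-in for the 'robustness parameter'
    unload_stop: int    # delivery stop at which the item is unloaded


def group_by_unload_stop(items):
    """Organize items into groups by the stop at which they are unloaded,
    mirroring the grouping described in the quoted paragraphs."""
    groups = defaultdict(list)
    for item in items:
        groups[item.unload_stop].append(item)
    # Items unloaded at the earliest stop form the first group.
    return dict(sorted(groups.items()))
```

Grouping by unload stop is what lets a planner load the last-unloaded group deepest into the container, which is the sequencing behavior the quoted rendering (first/next/last color coding) reflects.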
Regarding claim 5, Gualtieri teaches all that is required as applied to claim 1 above and further teaches wherein the LiDAR scanner identifies one or more of the plurality of objects and the embarkation system retrieves a digital twin of the identified one or more objects from the digital twin database (see Gualtieri, paragraph 0016 teaching “the packing data sources that provide the information related to the parameters associated with the set of items to be loaded into the three-dimensional container can include any suitable device or combination of devices that can capture or provide such information” and “packing data sources can include a desktop computer, a laptop computer, a smartphone, and/or a similar device that can obtain information related to dimensions, shapes, weights, and/or the like for the set of items via user input, from a storage device, and/or the like and convey the parameters related to the set of items to the load planning platform” and “an augmented reality device, a mixed reality device, a hand-held laser scanner, a structured light scanner, and/or another suitable device having appropriate three-dimensional scanning capabilities can be used to collect data relating to the size and shape of the items to be loaded into the three-dimensional container. 
Additionally, or alternatively, one or more of the items can be provided with a printed label, a barcode, a marking, a radio frequency identification (RFID) tag, and/or the like, which can be used to obtain the information related to the size, shape, and weight of the corresponding item(s)” such as explained above a LiDAR scanner may be used to acquire information about the objects and this may include obtaining such information “from a storage device” where such information “can be used to obtain the information related to the size, shape, and weight of the corresponding items”; note also that more broadly, additionally and alternatively as it has been explained above that the LiDAR scanner identifies one or more of the objects, the embarkation system retrieving a digital twin of the identified objects as recited is not actually tied to such LiDAR scanning step specifically as such retrieving of a digital twin could take place at any point, thus see paragraph 0017 teaching “the load planning platform can generate a packing plan based on the information relating to the constraints associated with the three-dimensional container, the parameters related to the items to be loaded into the three-dimensional container, and a sequence in which the set of items are to be unloaded from the three-dimensional container (e.g., based on a delivery route or delivery sequence, which can be input by a user, obtained from a route planning device, and/or the like)” such that again given that the computing resources are in a networked server environment, then when such digital twin information is retrieved at any time by the embarkation system then the limitation is met). 
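The retrieval mapped for claim 5 (a scanner produces an identifier for an object; the system then fetches the stored model for that object) reduces to a keyed lookup. The sketch below illustrates that step with an in-memory dictionary standing in for the digital twin database; every class, method, and identifier name here is hypothetical, not drawn from the reference.

```python
class DigitalTwinDB:
    """Stand-in for a networked digital twin database: maps an object
    identifier (e.g., decoded from a scan, barcode, or RFID tag) to the
    stored geometry/parameters for that object."""

    def __init__(self):
        self._twins = {}

    def register(self, object_id, twin):
        """Store a digital twin record under its object identifier."""
        self._twins[object_id] = twin

    def retrieve(self, object_id):
        """Fetch the stored twin; raise if no twin exists for the id."""
        twin = self._twins.get(object_id)
        if twin is None:
            raise KeyError(f"no digital twin stored for {object_id!r}")
        return twin


def resolve_scanned_objects(scanned_ids, db):
    """For each identifier produced by the scanner, retrieve the
    corresponding digital twin from the database."""
    return {oid: db.retrieve(oid) for oid in scanned_ids}
```

Note the lookup itself is independent of how the identifier was obtained, which is consistent with the examiner's point that the retrieving step is not tied to the LiDAR scan specifically.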
Regarding claim 7, Gualtieri teaches all that is required as applied to claim 1 above and further teaches wherein a packer uses a rendering of the final 3D map of the location of each object of the equipment set within the container to pack the equipment set with the container, and the embarkation system is configured to provide a visualization to the packer using the AR computer tablet and/or AR headset to provide an overlay of the final 3D map of the location of each object and an indication of the physical placement of objects within the container as they are packed, relative to a desired location of each object as indicated by the final 3D map of the location of each object (see Gualtieri, paragraphs 0030-0035 teaching “load planning platform can generate a three-dimensional rendering of the best packing solution and further generate a plan for loading and unloading the three-dimensional container to implement the best packing solution. In particular, as shown in FIG. 1D, the three-dimensional rendering can show a placement of each item within the three-dimensional container, with the various items marked, labeled, colored, or otherwise differentiated based on the sequence in which the items are to be loaded into and unloaded from the three-dimensional container” and “the plan for loading and unloading the three-dimensional container can comprise … augmented reality content for guiding personnel through loading the set of items into the three-dimensional container, and/or the like” and “the instructions can be provided as augmented reality content for guiding personnel through loading the set of items into the three-dimensional container. 
For example, a client device having augmented reality and/or mixed reality capabilities (e.g., a smart phone, an optical see-through head mounted display, and/or the like) can be directed towards the set of items to be loaded into the three-dimensional container, and digital content can be rendered to draw attention to the first set of items to be loaded into the three-dimensional container (e.g., via a colored or patterned overlay, a billboard, and/or the like). The digital content can further indicate where to place the first set of items within the three-dimensional container, and upon confirming that the first set of items have been correctly placed, this sequence can repeat until all items have been loaded into the three-dimensional container” such that here the personnel is a packer using an AR headset using the rendered final 3D map and provides the claimed overlays to show placement indications as items are packed relative to the map). Regarding claim 8, Gualtieri teaches all that is required as applied to claim 7 above and further teaches wherein if an object collides with another object or the container at the desir

Prosecution Timeline

Jun 17, 2024 — Application Filed
Dec 12, 2025 — Non-Final Rejection, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561816 — MOTION CAPTURE USING CONCAVE REFLECTOR STRUCTURES — granted Feb 24, 2026 (2y 5m to grant)
Patent 12561845 — DISTORTION INFORMATION FOR EACH ITERATION OF VERTICES RECONSTRUCTION — granted Feb 24, 2026 (2y 5m to grant)
Patent 12524957 — METHOD OF GENERATING THREE-DIMENSIONAL MODEL AND DATA PROCESSING DEVICE PERFORMING THE SAME — granted Jan 13, 2026 (2y 5m to grant)
Patent 12518408 — VIDEO-BASED TRACKING SYSTEMS AND METHODS — granted Jan 06, 2026 (2y 5m to grant)
Patent 12519919 — METHOD AND SYSTEM FOR CONVERTING SINGLE-VIEW IMAGE TO 2.5D VIEW FOR EXTENDED REALITY (XR) APPLICATIONS — granted Jan 06, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 69%
With Interview: 81% (+12.0%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
