Prosecution Insights
Last updated: April 19, 2026
Application No. 18/451,791

VESSEL LOADING WITH GUIDANCE USING AN AUGMENTED REALITY HEADSET

Non-Final OA (§102, §103)
Filed: Aug 17, 2023
Examiner: AUGUSTIN, MARCELLUS
Art Unit: 2682
Tech Center: 2600 — Communications
Assignee: Inxeption Corporation
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
OA Rounds: 1-2
To Grant: 2y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 82% — above average (684 granted / 838 resolved; +19.6% vs TC avg)
Interview Lift: +15.9% — strong (allowance rate on resolved cases with interview vs without)
Typical Timeline: 2y 8m avg prosecution; 31 applications currently pending
Career History: 869 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 18.5% (-21.5% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 838 resolved cases.

Office Action

Rejections under §102 and §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. The IDS filed 08/29/2023 has been received and considered. Claims 1-20 are currently pending. Please refer to the action below.

Examiner Notes

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. However, the claimed subject matter, not the specification, is the measure of the invention.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 9, 11-12, and 20 are rejected under 35 U.S.C. 102(a)(1) as being unpatentable, based upon a public use or sale or other public availability of the invention, over Kumar et al. (EP 3702985 A1).
Regarding claim 1, Kumar teaches a method implemented by one or more processors (the augmented reality methods and systems of at least Figs. 1A-1I and Fig. 3 depict a method implemented by one or more processors, utilizing AR-glasses visualizations for at least arranging one or more packages inside a cargo container), the method comprising: identifying a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Figs. 1A-1I, a heterogeneous plurality of packages and, further in Figs. 1G-1J, a cargo vessel into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identifying an arrangement of the heterogeneous plurality of packages within an interior of the vessel (identifying, further in at least Figs. 1G and 1I, the cited configuration arrangement of the heterogeneous plurality of packages within an interior of the vessel); and for each package in the heterogeneous plurality of packages: performing object recognition on the package, using an augmented reality (AR) headset (using the AR glasses of the disclosure to perform object recognition on each package in the heterogeneous plurality of packages); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, providing, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (further providing, in at least Figs. 1G and 1I and the disclosure, a visual position annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel).
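The per-package guidance loop recited in claim 1 (identify packages and vessel, compute an arrangement, recognize each package, then annotate its target position) can be sketched as follows. This is an illustrative Python sketch only: `Package`, `identify_arrangement`, and `loading_guidance` are hypothetical names, the first-fit placement is a stand-in for whatever arrangement logic the cited references actually disclose, and the `recognize`/`annotate` callbacks stand in for the AR headset's object-recognition and display functions.

```python
from dataclasses import dataclass

@dataclass
class Package:
    """Hypothetical package record with outer dimensions in meters."""
    package_id: str
    length: float
    width: float
    height: float

def identify_arrangement(packages, vessel_dims):
    """Toy first-fit arrangement: place packages end to end along the
    vessel's length axis and record a target position for each one."""
    positions = {}
    cursor = 0.0
    for pkg in packages:
        if cursor + pkg.length > vessel_dims[0]:
            raise ValueError("packages do not fit along the length axis")
        positions[pkg.package_id] = (cursor, 0.0, 0.0)
        cursor += pkg.length
    return positions

def loading_guidance(packages, vessel_dims, recognize, annotate):
    """Per-package loop paralleling the claim: recognize each package,
    then emit a visual annotation carrying its target position."""
    arrangement = identify_arrangement(packages, vessel_dims)
    for pkg in packages:
        recognized_id = recognize(pkg)  # stand-in for AR object recognition
        annotate(recognized_id, arrangement[recognized_id])
    return arrangement
```

A caller might pass `recognize=lambda p: p.package_id` and collect the emitted annotations in a list to inspect the loading order.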
Regarding claim 9 (according to claim 1), Kumar further teaches wherein, for each package in the heterogeneous plurality of packages, the visual annotation corresponding to the package overlays one or more digital images depicting the interior of the vessel (displaying overlaid "load next" information, further in Fig. 1I); wherein the visual annotation conveys one or more aspects of the position for the package in the arrangement of the heterogeneous plurality of packages within the interior of the vessel (Fig. 1I).

Regarding claim 11 (according to claim 1), Kumar further teaches wherein the vessel is a shipping container, a trailer, a box truck, a panel truck, a cargo van, an aircraft, a ship, or a railcar (shipping containers of further Figs. 1G and 1I-1J).

Regarding claim 12, Kumar teaches, in at least Fig. 3, a computer program product 330 comprising one or more non-transitory computer-readable storage media having program instructions collectively stored on the one or more computer-readable storage media, the program instructions executable to: identify a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Figs. 1A-1I, a heterogeneous plurality of packages and, further in Figs. 1G-1J, a cargo vessel into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identify an arrangement of the heterogeneous plurality of packages within an interior of the vessel (identifying further in at least Figs.
1G and 1I, a cited configuration arrangement instruction of the heterogeneous plurality of packages within an interior of the vessel); and for each package in the heterogeneous plurality of packages: perform object recognition on the package, using an augmented reality (AR) headset (using the AR glasses of the disclosure to perform object recognition on each package in the heterogeneous plurality of packages); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, provide, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (further providing, in at least Figs. 1G and 1I and the disclosure, a visual position annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel).

Regarding claim 20, Kumar teaches a system of Figs. 2-3 comprising: a processor 320, a computer-readable memory 330, one or more computer-readable storage media, and program instructions collectively stored on the one or more computer-readable storage media, the program instructions executable to: identify a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Figs. 1A-1I, a heterogeneous plurality of packages and, further in Figs. 1G-1J, a cargo vessel into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identify an arrangement of the heterogeneous plurality of packages within an interior of the vessel (identifying further in at least Figs.
1G and 1I, a cited configuration arrangement instruction of the heterogeneous plurality of packages within an interior of the vessel); and for each package in the heterogeneous plurality of packages: perform object recognition on the package, using an augmented reality (AR) headset (using the AR glasses of the disclosure to perform object recognition on each package in the heterogeneous plurality of packages); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, provide, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (further providing, in at least Figs. 1G and 1I and the disclosure, a visual position annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel).

Claims 1-5, 7-16, and 18-20 are further rejected under 35 U.S.C. 102 as being unpatentable over Soles et al. (US 11514386 A1).

Regarding claim 1, Soles teaches a method implemented by one or more processors (Fig. 1 depicts augmented reality methods and systems 120 implemented by one or more processors 121 for at least arranging one or more packages inside a vessel container of at least Fig. 12), the method comprising: identifying a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Figs. 4-8 and 12, a plurality of packages which, as depicted in at least Fig.
12, further includes heterogeneous packages and an identified truck vessel container into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identifying an arrangement of the heterogeneous plurality of packages within an interior of the vessel (identifying the arrangement of further Figs. 4-8 and 12); and for each package in the heterogeneous plurality of packages: performing object recognition on the package, using an augmented reality (AR) headset (using the augmented reality (AR) headset 120 of further Figs. 4-8 and 12 for performing object recognition on the package); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, providing, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (further providing, in at least Figs. 4-8 and 12, visual annotations 600 and 1200, interior volume image data 1202 of the interior of the shipping vessel, and visual/auditory overlays to further assist in recognition and arrangement of the corresponding package, the visual annotation further indicating a position for the package within the interior of the vessel).

Regarding claim 2 (according to claim 1), Soles further teaches wherein, for each package in the heterogeneous plurality of packages, performing object recognition on the package is in response to a gaze of a user wearing the AR headset being directed toward the package (the system further includes “object detection sensors may work in connection with other sensors that detect, for example, head movement, field of vision, gaze, orientation, location, and the like” for recognizing at least a user gaze, and further Figs.
4-8 and 12 identify packages and recognize package placement locations, which further implies that said system is obviously configured such that, for each package in the heterogeneous plurality of packages, performing object recognition on the package is obviously in response to a gaze of a user wearing the AR headset being directed toward the package).

Regarding claim 3 (according to claim 1), Soles further teaches wherein, for each package in the heterogeneous plurality of packages, using the AR headset to provide the visual annotation corresponding to the package is in response to a gaze of a user wearing the AR headset being directed toward the interior of the vessel (the system of further Figs. 4-8 and 12 further depicts, using the AR headset, a provided visual annotation corresponding to the package and further illustrates “object detection sensors may work in connection with other sensors that detect, for example, head movement, field of vision, gaze, orientation, location, and the like” for recognizing at least a user gaze, which further implies that said system is obviously configured such that, for each package in the heterogeneous plurality of packages, providing said visual annotation corresponding to the package is obviously in response to a gaze of a user wearing the AR headset being directed toward the interior of the vessel).

Regarding claim 4 (according to claim 3), Soles further teaches wherein, for each package in the heterogeneous plurality of packages, using the AR headset to provide the visual annotation corresponding to the package is further in response to receiving user input indicating a selection of the package (visual annotations of further Figs. 4-8 and 12 in response to receiving user input, gaze, or the like indicating a selection of the package).
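Claims 2-4 condition recognition and annotation on the user's gaze being directed toward a package or the vessel interior. A minimal sketch of gaze targeting, assuming gaze is modeled as a ray and packages as point positions in headset coordinates (all names are hypothetical; the cited references describe gaze-detecting sensors, not this specific angular test):

```python
import math

def gaze_target(gaze_origin, gaze_dir, objects, max_angle_deg=5.0):
    """Return the id of the object whose direction is closest to the
    gaze ray, provided it lies within max_angle_deg; otherwise None."""
    def angle_to(pos):
        # Angle (degrees) between the gaze direction and origin->object vector.
        v = tuple(p - o for p, o in zip(pos, gaze_origin))
        dot = sum(a * b for a, b in zip(v, gaze_dir))
        nv = math.sqrt(sum(a * a for a in v))
        ng = math.sqrt(sum(a * a for a in gaze_dir))
        if nv == 0.0 or ng == 0.0:
            return 180.0
        cosang = max(-1.0, min(1.0, dot / (nv * ng)))
        return math.degrees(math.acos(cosang))

    best = min(objects.items(), key=lambda kv: angle_to(kv[1]), default=None)
    if best is not None and angle_to(best[1]) <= max_angle_deg:
        return best[0]
    return None
```

Recognition or annotation would then be triggered only when `gaze_target` returns a package id rather than `None`.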
Regarding claim 5 (according to claim 4), Soles further teaches wherein the user input indicating the selection of the package comprises a wink or a blink (“object detection sensors may work in connection with other sensors that detect, for example, head movement, field of vision, gaze, orientation, location, and the like” for recognizing at least a user gaze further includes, in the art, a configuration of monitoring user eyelid movement or blinking as at least inputs for obviously indicating the selection of the package).

Regarding claim 7 (according to claim 1), Soles further teaches wherein: the interior of the vessel is logically divided into a three-dimensional (3D) array of logical cells, and for each package in the heterogeneous plurality of packages, the visual annotation indicates a particular logical cell in the 3D array of logical cells into which the package is to be loaded (the disclosure further depicts, at least in Fig. 12, an interior shipping container volume visualization 1202 and a cited “three-dimensional shape defining the space inside of which objects may be placed. For example, according to some embodiments, a packing area comprises a two-dimensional surface having a perimeter that defines the area inside of which objects are to be placed”, further indicative of said three-dimensional (3D) array of logical cells, and for each package in the heterogeneous plurality of packages, the visual annotation of further Figs. 4-8 and 12 indicates a particular logical cell in the 3D array of logical cells into which the package is to be loaded).

Regarding claim 8 (according to claim 1), Soles further teaches wherein the arrangement of the heterogeneous plurality of packages comprises a plurality of layers of the packages (the packaging arrangement of the packages of further Figs.
4-8 and 12, inside the interior of the shipping container, inherently comprises a plurality of layers of the packages for safely and properly packing or loading said packages, for safety reasons, breakability, and the like).

Regarding claim 9 (according to claim 1), Soles further teaches wherein, for each package in the heterogeneous plurality of packages, the visual annotation corresponding to the package overlays one or more digital images depicting the interior of the vessel (visual annotations 600/1200 of further Figs. 4-8 and 12); wherein the visual annotation conveys one or more aspects of the position for the package in the arrangement of the heterogeneous plurality of packages within the interior of the vessel (Figs. 4-8 and 12).

Regarding claim 10 (according to claim 1), Soles further teaches further comprising, for each package in the heterogeneous plurality of packages, based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, rendering audio output corresponding to the package, the audio output indicating the position for the package within the interior of the vessel (the disclosure further renders auditory output, in addition to the visual annotation corresponding to the package overlaying one or more digital images depicting the interior of the vessel, as at least audio output corresponding to the package, the audio output indicating the position for the package within the interior of the vessel).

Regarding claim 11 (according to claim 1), Soles further teaches wherein the vessel is a shipping container, a trailer, a box truck, a panel truck, a cargo van, an aircraft, a ship, or a railcar (Fig. 12 further depicts a shipping container truck).

Regarding claim 12, Soles teaches in at least Fig.
1 a computer program product 122 comprising one or more non-transitory computer-readable storage media having program instructions collectively stored on the one or more computer-readable storage media, the program instructions executable to: identify a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Figs. 4-8 and 12, a plurality of packages which, as depicted in at least Fig. 12, further includes heterogeneous packages and an identified truck vessel container into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identify an arrangement of the heterogeneous plurality of packages within an interior of the vessel (identifying the arrangement of further Figs. 4-8 and 12); and for each package in the heterogeneous plurality of packages: perform object recognition on the package, using an augmented reality (AR) headset (using the augmented reality (AR) headset 120 of further Figs. 4-8 and 12 for performing object recognition on the package); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, provide, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (further providing, in at least Figs. 4-8 and 12, visual annotations 600 and 1200, interior volume image data 1202 of the interior of the shipping vessel, and visual/auditory overlays to further assist in recognition and arrangement of the corresponding package, the visual annotation further indicating a position for the package within the interior of the vessel).
Regarding claim 13 (according to claim 12), Soles further teaches wherein, for each package in the heterogeneous plurality of packages, performing object recognition on the package is in response to a gaze of a user wearing the AR headset being directed toward the package (the system further includes “object detection sensors may work in connection with other sensors that detect, for example, head movement, field of vision, gaze, orientation, location, and the like” for recognizing at least a user gaze, and further Figs. 4-8 and 12 identify packages and recognize package placement locations, which further implies that said system is obviously configured such that, for each package in the heterogeneous plurality of packages, performing object recognition on the package is obviously in response to a gaze of a user wearing the AR headset being directed toward the package).

Regarding claim 14 (according to claim 12), Soles further teaches wherein, for each package in the heterogeneous plurality of packages, using the AR headset to provide the visual annotation corresponding to the package is in response to a gaze of a user wearing the AR headset being directed toward the interior of the vessel (the system of further Figs. 4-8 and 12 further depicts, using the AR headset, a provided visual annotation corresponding to the package and further illustrates “object detection sensors may work in connection with other sensors that detect, for example, head movement, field of vision, gaze, orientation, location, and the like” for recognizing at least a user gaze, which further implies that said system is obviously configured such that, for each package in the heterogeneous plurality of packages, providing said visual annotation corresponding to the package is obviously in response to a gaze of a user wearing the AR headset being directed toward the interior of the vessel).
Regarding claim 15 (according to claim 14), Soles further teaches wherein, for each package in the heterogeneous plurality of packages, using the AR headset to provide the visual annotation corresponding to the package is further in response to receiving user input indicating a selection of the package (visual annotations of further Figs. 4-8 and 12 in response to receiving user input, gaze, or the like indicating a selection of the package).

Regarding claim 16 (according to claim 15), Soles further teaches wherein the user input indicating the selection of the package comprises a wink or a blink (“object detection sensors may work in connection with other sensors that detect, for example, head movement, field of vision, gaze, orientation, location, and the like” for recognizing at least a user gaze further includes, in the art, a configuration of monitoring user eyelid movement or blinking as at least inputs for obviously indicating the selection of the package).

Regarding claim 18 (according to claim 12), Soles further teaches wherein: the interior of the vessel is logically divided into a three-dimensional (3D) array of logical cells, and for each package in the heterogeneous plurality of packages, the visual annotation indicates a particular logical cell in the 3D array of logical cells into which the package is to be loaded (the disclosure further depicts, at least in Fig. 12, an interior shipping container volume visualization 1202 and a cited “three-dimensional shape defining the space inside of which objects may be placed. For example, according to some embodiments, a packing area comprises a two-dimensional surface having a perimeter that defines the area inside of which objects are to be placed”, further indicative of said three-dimensional (3D) array of logical cells, and for each package in the heterogeneous plurality of packages, the visual annotation of further Figs.
4-8 and 12 indicates a particular logical cell in the 3D array of logical cells into which the package is to be loaded).

Regarding claim 19 (according to claim 12), Soles further teaches wherein the arrangement of the heterogeneous plurality of packages comprises a plurality of layers of the packages (the packaging arrangement of the packages of further Figs. 4-8 and 12, inside the interior of the shipping container, inherently comprises a plurality of layers of the packages for safely and properly packing or loading said packages, for safety reasons, breakability, and the like).

Regarding claim 20, Soles teaches in at least Fig. 1 a system comprising: a processor 121, a computer-readable memory 122, one or more computer-readable storage media, and program instructions collectively stored on the one or more computer-readable storage media, the program instructions executable to: identify a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Figs. 4-8 and 12, a plurality of packages which, as depicted in at least Fig. 12, further includes heterogeneous packages and an identified truck vessel container into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identify an arrangement of the heterogeneous plurality of packages within an interior of the vessel (identifying the arrangement of further Figs. 4-8 and 12); and for each package in the heterogeneous plurality of packages: perform object recognition on the package, using an augmented reality (AR) headset (using the augmented reality (AR) headset 120 of further Figs.
4-8 and 12 for performing object recognition on the package); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, provide, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (further providing, in at least Figs. 4-8 and 12, visual annotations 600 and 1200, interior volume image data 1202 of the interior of the shipping vessel, and visual/auditory overlays to further assist in recognition and arrangement of the corresponding package, the visual annotation further indicating a position for the package within the interior of the vessel).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 6 and 17 are rejected under 35 U.S.C. 103 as being obvious over Soles in view of Nemoto et al. (JP 2016147736 A1).
Regarding claim 6 (according to claim 1), Soles is silent regarding further comprising: based on a gaze of a user wearing the AR headset being directed toward two or more packages of the heterogeneous plurality of packages, the two or more packages being ready to load into the interior of the vessel: identifying, from the two or more packages, a particular package to be loaded next into the interior of the vessel; and providing, using the AR headset, a visual annotation indicating the particular package to be loaded next into the interior of the vessel.

Nemoto teaches the worn device 20 adapted to receive and identify a next container/package to pick up for loading into a container or the like: from at least two or more packages being ready to load into the interior of the vessel, identifying, from the two or more packages, a particular package to be loaded next inside the vessel or container, and providing, using the mobile device, a visual signal indicating the particular package to be loaded next.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Soles in view of Nemoto to include: based on a gaze of a user wearing the AR headset being directed toward two or more packages of the heterogeneous plurality of packages, the two or more packages being ready to load into the interior of the vessel, identifying, from the two or more packages, a particular package to be loaded next into the interior of the vessel; and providing, using the AR headset, a visual annotation indicating the particular package to be loaded next into the interior of the vessel, as discussed above. Soles and Nemoto are in the same field of endeavor of identifying a package or container to be loaded next according to information displayed on a wearable or worn device. Nemoto further complements the worn AR headset of Soles with a supplemental visual annotation indicating the particular package to be loaded next into the interior of the vessel, which, when added to the AR headset of Soles, further advantageously provides additional visual package placement or arrangement annotation data for more safely packaging said packages according to further known methods to yield predictable results, since known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art. Said combination is thus the adaptation of an old idea or invention using newer technology that is commonly available and understood in the art, and thereby a variation on already known art (see MPEP 2143, KSR Exemplary Rationale F).
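The claim 6 limitation (picking, from two or more gazed-at candidate packages, the one to be loaded next) reduces to selecting the candidate that comes earliest in a planned loading order. A minimal illustrative sketch, assuming such an order exists; the function name and the use of an explicit ordered list are hypothetical, not drawn from the application or the cited art:

```python
def next_package_to_load(candidate_ids, loading_order):
    """From two or more gazed-at candidates, pick the one that comes
    earliest in the planned loading order (the 'load next' package).
    Returns None if no candidate appears in the order."""
    order = {pid: i for i, pid in enumerate(loading_order)}
    eligible = [pid for pid in candidate_ids if pid in order]
    if not eligible:
        return None
    return min(eligible, key=order.__getitem__)
```

The AR annotation step would then highlight only the returned package id among the gazed-at candidates.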
Regarding claim 17 (according to claim 12), Soles is silent regarding the program instructions further being executable to: based on a gaze of a user wearing the AR headset being directed toward two or more packages of the heterogeneous plurality of packages, the two or more packages being ready to load into the interior of the vessel: identify, from the two or more packages, a particular package to be loaded next into the interior of the vessel; and provide, using the AR headset, a visual annotation indicating the particular package to be loaded next into the interior of the vessel.

Nemoto teaches the worn device 20 adapted to receive and identify a next container/package to pick up for loading into a container or the like: from at least two or more packages being ready to load into the interior of the vessel, identifying, from the two or more packages, a particular package to be loaded next inside the vessel or container, and providing, using the mobile device, a visual signal indicating the particular package to be loaded next.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Soles in view of Nemoto to include: based on a gaze of a user wearing the AR headset being directed toward two or more packages of the heterogeneous plurality of packages, the two or more packages being ready to load into the interior of the vessel, identifying, from the two or more packages, a particular package to be loaded next into the interior of the vessel; and providing, using the AR headset, a visual annotation indicating the particular package to be loaded next into the interior of the vessel, as discussed above. Soles and Nemoto are in the same field of endeavor of identifying a package or container to be loaded next according to information displayed on a wearable or worn device. Nemoto further complements the worn AR headset of Soles with a supplemental visual annotation indicating the particular package to be loaded next into the interior of the vessel, which, when added to the AR headset of Soles, further advantageously provides additional visual package placement or arrangement annotation data for more safely packaging said packages according to further known methods to yield predictable results, since known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art. Said combination is thus the adaptation of an old idea or invention using newer technology that is commonly available and understood in the art, and thereby a variation on already known art (see MPEP 2143, KSR Exemplary Rationale F).

Claims 1, 7-9, 11-12, and 18-20 are further rejected under 35 U.S.C. 102(a)(1) as being unpatentable, based upon a public use or sale or other public availability of the invention, over Gizatov et al. (WO 2019092411 A1).
Regarding claim 1, Gizatov teaches a method implemented by one or more processors (augmented reality methods and systems of at least Figs. 4B and 4D-6B, implemented by one or more well-known processors, for at least arranging one or more packages inside a vessel container 1000), the method comprising: identifying a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Fig. 5, a plurality of packages 100-3, which as understood in the art may obviously be heterogeneous, and an identified vessel or container 1000 into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identifying an arrangement of the heterogeneous plurality of packages within an interior of the vessel (based on identifying said plurality of packages 100-3 and the vessel 1000 of Figs. 4B and 6B, the system further identifies a coordinate and position arrangement by "calculating a 3-dimensional coordinate of a position of a package inside an interior of a container… also include the extent of the volume occupied by the package 100," in addition to annotated package position indicators 412/422, and further utilizes AR headset 420 for viewing said packages and the interior of the container 1000); and for each package in the heterogeneous plurality of packages: performing object recognition on the package, using an augmented reality (AR) headset (the device 420 of at least Figs. 5, 6B, and 4B is further configured for performing object recognition on the package 100-3, using an augmented reality (AR) headset goggle 420); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, providing, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (based on identifying the arrangement of the heterogeneous plurality of packages according to at least visual annotated pointers 412/422 and the provided visual 3D map of the interior of the vessel as illustrated in the disclosure, and further based on performing object recognition on the package, providing, using the AR headset 420, a visual annotation 412/422 corresponding to the package 100-3, the visual annotation 412/422 indicating a position for the package within the interior of the vessel 1000).

Regarding claim 7 (according to claim 1), Gizatov further teaches wherein: the interior of the vessel is logically divided into a three-dimensional (3D) array of logical cells, and for each package in the heterogeneous plurality of packages, the visual annotation indicates a particular logical cell in the 3D array of logical cells into which the package is to be loaded (the disclosure further cites "displaying a 3-dimensional object corresponding to the combination of the determined surfaces, such that the 3-dimensional object represents the internal space of the container, overlaid with the displayed image of the interior of the container," which is further indicative of a three-dimensional (3D) array of logical cells, and for each package in the heterogeneous plurality of packages, the visual annotation indicates a particular logical cell in the 3D array of logical cells into which the package may obviously be loaded, as there is a case where the packages are unloaded and have to be loaded back).
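As a hedged illustration of the "3D array of logical cells" limitation discussed for claim 7 (a sketch under an assumed cubic-cell model; the function names and the 0.5 m cell size are invented for the example, not taken from Gizatov):

```python
import math

def cell_grid_shape(interior_dims, cell_size):
    """Logically divide a container interior (length, width, height,
    in metres) into a 3D array of cubic logical cells.
    Returns the grid shape (nx, ny, nz)."""
    return tuple(math.ceil(d / cell_size) for d in interior_dims)

def cell_for_position(position, cell_size):
    """Map a 3D package coordinate, measured from the container's
    origin corner, to the index of the logical cell containing it."""
    return tuple(math.floor(c / cell_size) for c in position)
```

A visual annotation would then highlight `cell_for_position(...)` for each recognized package, i.e. the particular logical cell into which the package is to be loaded.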
Regarding claim 8 (according to claim 1), Gizatov further teaches wherein the arrangement of the heterogeneous plurality of packages comprises a plurality of layers of the packages (the packing arrangement of the packages 100-3 of at least Figs. 4-6 inside the interior of a container and/or vessel inherently comprises a plurality of layers of the packages for safely and properly packing or loading said packages, for reasons of safety, breakability, and the like).

Regarding claim 9 (according to claim 1), Gizatov further teaches wherein for each package in the heterogeneous plurality of packages, the visual annotation corresponding to the package overlays one or more digital images depicting the interior of the vessel (top image of Fig. 6B); wherein the visual annotation conveys one or more aspects of the position for the package in the arrangement of the heterogeneous plurality of packages within the interior of the vessel (visual annotation 412/422 of Figs. 4B and 6B).

Regarding claim 11 (according to claim 1), Gizatov further teaches wherein the vessel is a shipping container, a trailer, a box truck, a panel truck, a cargo van, an aircraft, a ship, or a railcar (at least the vessel of the top image of Fig. 6B is the interior of a shipping container).

Regarding claim 12, Gizatov teaches a computer program product comprising one or more non-transitory computer-readable storage media having program instructions collectively stored on the one or more computer-readable storage media (the disclosure cites that at least the AR device 420 comprises at least a memory comprising at least a program product), the program instructions executable to: identify a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Fig. 5, a plurality of packages 100-3, which as understood in the art may obviously be heterogeneous, and an identified vessel or container 1000 into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identify an arrangement of the heterogeneous plurality of packages within an interior of the vessel (based on identifying said plurality of packages 100-3 and the vessel 1000 of Figs. 4B and 6B, the system further identifies a coordinate and position arrangement by "calculating a 3-dimensional coordinate of a position of a package inside an interior of a container… also include the extent of the volume occupied by the package 100," in addition to annotated package position indicators 412/422, and further utilizes AR headset 420 for viewing said packages and the interior of the container 1000); and for each package in the heterogeneous plurality of packages: perform object recognition on the package, using an augmented reality (AR) headset (the device 420 of at least Figs. 5, 6B, and 4B is further configured for performing object recognition on the package 100-3, using an augmented reality (AR) headset goggle 420); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, provide, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (based on identifying the arrangement of the heterogeneous plurality of packages according to at least visual annotated pointers 412/422 and the provided visual 3D map of the interior of the vessel as illustrated in the disclosure, and further based on performing object recognition on the package, providing, using the AR headset 420, a visual annotation 412/422 corresponding to the package 100-3, the visual annotation 412/422 indicating a position for the package within the interior of the vessel 1000).

Regarding claim 18 (according to claim 12), Gizatov further teaches wherein: the interior of the vessel is logically divided into a three-dimensional (3D) array of logical cells, and for each package in the heterogeneous plurality of packages, the visual annotation indicates a particular logical cell in the 3D array of logical cells into which the package is to be loaded (the disclosure further cites "displaying a 3-dimensional object corresponding to the combination of the determined surfaces, such that the 3-dimensional object represents the internal space of the container, overlaid with the displayed image of the interior of the container," which is further indicative of a three-dimensional (3D) array of logical cells, and for each package in the heterogeneous plurality of packages, the visual annotation indicates a particular logical cell in the 3D array of logical cells into which the package may obviously be loaded, as there is a case where the packages are unloaded and have to be loaded back).
Regarding claim 19 (according to claim 12), Gizatov further teaches wherein the arrangement of the heterogeneous plurality of packages comprises a plurality of layers of the packages (the packing arrangement of the packages 100-3 of at least Figs. 4-6 inside the interior of a container and/or vessel inherently comprises a plurality of layers of the packages for safely and properly packing or loading said packages, for reasons of safety, breakability, and the like).

Regarding claim 20, Gizatov teaches, at least in Fig. 1, a system comprising: a processor (each of the devices 410/420 of Figs. 4 comprises at least one processor 200), a computer-readable memory, one or more computer-readable storage media, and program instructions collectively stored on the one or more computer-readable storage media (each of the devices 410/420 of Figs. 4 comprises at least one computer-readable memory, one or more computer-readable storage media, and program instructions), the program instructions executable to: identify a heterogeneous plurality of packages and a vessel into which the heterogeneous plurality of packages is to be loaded (identifying, at least in Fig. 5, a plurality of packages 100-3, which as understood in the art may obviously be heterogeneous, and an identified vessel or container 1000 into which the heterogeneous plurality of packages is to be loaded); based on identifying the heterogeneous plurality of packages and the vessel, identify an arrangement of the heterogeneous plurality of packages within an interior of the vessel (based on identifying said plurality of packages 100-3 and the vessel 1000 of Figs. 4B and 6B, the system further identifies a coordinate and position arrangement by "calculating a 3-dimensional coordinate of a position of a package inside an interior of a container… also include the extent of the volume occupied by the package 100," in addition to annotated package position indicators 412/422, and further utilizes AR headset 420 for viewing said packages and the interior of the container 1000); and for each package in the heterogeneous plurality of packages: perform object recognition on the package, using an augmented reality (AR) headset (the device 420 of at least Figs. 5, 6B, and 4B is further configured for performing object recognition on the package 100-3, using an augmented reality (AR) headset goggle 420); and based on identifying the arrangement of the heterogeneous plurality of packages and further based on performing object recognition on the package, provide, using the AR headset, a visual annotation corresponding to the package, the visual annotation indicating a position for the package within the interior of the vessel (based on identifying the arrangement of the heterogeneous plurality of packages according to at least visual annotated pointers 412/422 and the provided visual 3D map of the interior of the vessel as illustrated in the disclosure, and further based on performing object recognition on the package, providing, using the AR headset 420, a visual annotation 412/422 corresponding to the package 100-3, the visual annotation 412/422 indicating a position for the package within the interior of the vessel 1000).

Claims 2-5 and 13-16 are rejected under 35 U.S.C. 103 as being obvious over Gizatov in view of Maltz et al. (US 20160004306, A1).
Regarding claim 2 (according to claim 1), Gizatov further teaches wherein for each package in the heterogeneous plurality of packages, object recognition is performed on the package. However, Gizatov is silent regarding the lined-out items above, i.e., wherein for said each package in the heterogeneous plurality of packages, performing said object recognition on the package is in response to a gaze of a user wearing the AR headset being directed toward the package. Maltz teaches, at least in paras. 0030, 0055, and 0069, a worn headset device with an eye-tracking mechanism to track a user's gaze and detect gazed-at objects and directions; the system further, in paras. 0055 and 0069, performs recognition of the gazed objects' identification and position, which object recognition further pertains, in para. 0144, to shipping packages, obviously including a heterogeneous plurality of packages being recognized, in one case, in response to a gaze of a user wearing the AR headset being directed toward the package.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gizatov in view of Maltz to include, for said each package in the heterogeneous plurality of packages, performing said object recognition on the package in response to a gaze of a user wearing the AR headset being directed toward the package, as discussed above. Gizatov and Maltz are in the same field of endeavor of performing object recognition on a package in response to a user-worn AR headset being directed toward the package, for each said package in the plurality of packages. Maltz further complements the worn AR headset of Gizatov with a supplemental gaze-tracking means, which, when added to the AR headset of Gizatov, further advantageously provides an additional input means adapted to track a user's gaze or eye fixations on certain packages, for at least detecting or identifying a package and a container interior position in which to place said package among the plurality of packages according to further conveyed overlaid information, for more safely loading said packages according to known methods to yield predictable results, since known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art; said combination is thus the adaptation of an old idea or invention using newer technology that is commonly available and understood in the art, and thereby a variation on already known art (see MPEP 2143, KSR Exemplary Rationale F).
Regarding claim 3 (according to claim 1), Gizatov further teaches wherein for each package in the heterogeneous plurality of packages, the AR headset is used to provide the visual annotation corresponding to the package. However, Gizatov is silent regarding the lined-out items above, i.e., wherein for said each package in the heterogeneous plurality of packages, using said AR headset to provide said visual annotation corresponding to the package is in response to a gaze of the said user wearing the AR headset being directed toward the interior of the vessel. Maltz teaches, at least in paras. 0030, 0055, and 0069, a worn headset device with an eye-tracking mechanism to track a user's gaze and detect gazed-at objects and directions; the system further, in paras. 0055 and 0069, performs recognition of the gazed objects' identification and position, which object recognition further pertains, in para. 0144, to shipping packages, obviously including a heterogeneous plurality of packages being recognized, in one case, in response to a gaze of a user wearing the AR headset being directed toward the package.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Gizatov in view of Maltz to include, for said each package in the heterogeneous plurality of packages, using said AR headset to provide said visual annotation corresponding to the package in response to a gaze of the said user wearing the AR headset being directed toward the interior of the vessel, as discussed above. Gizatov and Maltz are in the same field of endeavor of performing object recognition on a package in response to a user-worn AR headset being directed toward the package, for each said package in the plurality of packages. Maltz further complements the worn AR headset of Gizatov with a supplemental gaze-tracking means, which, when added to the AR headset of Gizatov, further advantageously provides an additional input means adapted to track a user's gaze or eye fixations on certain packages, for at least detecting or identifying a package and a container interior position in which to place said package among the plurality of packages according to further conveyed overlaid information, for more safely loading said packages according to known methods to yield predictable results, since known work in one field of endeavor may prompt variations of it for use in either the same field or a different one based on design incentives or other market forces if the variations are predictable to one of ordinary skill in the art; said combination is thus the adaptation of an old idea or invention using newer technology that is commonly available and understood in the art, and thereby a variation on already known art (see MPEP 2143, KSR Exemplary Rationale F).
Regarding claim 4 (according to claim 3), Gizatov further teaches wherein for each package in the heterogeneous plurality of packages, using the AR headset to provide the visual annotation corresponding to the package is further in response to receiving user input indicating a selection of the package (the disclosure cites a case of "receiving a user input selecting one of the packages displayed on the electronic device," at least in Fig. 6B, wherein for each package in the heterogeneous plurality of packages, using, in one case, the AR headset to provide the visual annotation corresponding to the package is further in response to receiving user input indicating a selection of the package).

Regarding claim 5 (according to claim 4), Gizatov is silent regarding wherein the user input indicating the selection of the package comprises a wink or a blink. Maltz teaches at least in the disclosure a worn portable device 20 adapted to detect a blink input as at least an input indicating the selection
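The blink-as-selection input discussed for claim 5 might be sketched, purely for illustration, as follows; the dwell-time fallback and the 300 ms threshold are assumptions for the example, not taken from Maltz:

```python
from typing import Optional

def select_on_blink(gaze_target: Optional[str],
                    blink_detected: bool,
                    dwell_ms: float,
                    dwell_threshold_ms: float = 300.0) -> Optional[str]:
    """Treat a blink while gazing at a package (or a sufficiently
    long gaze dwell) as the user input selecting that package."""
    if gaze_target is None:
        return None  # nothing gazed at, so nothing to select
    if blink_detected or dwell_ms >= dwell_threshold_ms:
        return gaze_target
    return None
```

In use, the headset's eye tracker would supply `gaze_target` and `blink_detected` each frame, and the returned package (if any) would trigger the visual annotation.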

Prosecution Timeline

Aug 17, 2023
Application Filed
Nov 12, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597126
IMAGE SETTING DEVICE, IMAGE SETTING METHOD, AND IMAGE SETTING PROGRAM
2y 5m to grant Granted Apr 07, 2026
Patent 12586170
SYSTEM AND METHOD FOR GENERATING PREDICTIVE IMAGES FOR WAFER INSPECTION USING MACHINE LEARNING
2y 5m to grant Granted Mar 24, 2026
Patent 12573079
System and Method for Identifying Feature in an Image of a Subject
2y 5m to grant Granted Mar 10, 2026
Patent 12573388
BEHAVIOR DETECTION
2y 5m to grant Granted Mar 10, 2026
Patent 12569129
ANATOMICAL LOCATION DETECTION OF FEATURES OF A GASTROINTESTINAL TRACT OF A PATIENT
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
98%
With Interview (+15.9%)
2y 8m
Median Time to Grant
Low
PTA Risk
Based on 838 resolved cases by this examiner. Grant probability derived from career allow rate.
