DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is the First Office Action on the merits.
Claims 21-28 and 30 have been withdrawn from further consideration.
Claims 1-20 and 29 are currently pending and addressed below.
Election/Restrictions
Claims 21-28 and 30 are withdrawn from further consideration pursuant to 37 CFR 1.142(b) as being drawn to a nonelected invention, there being no allowable generic or linking claim. Election was made without traverse in the reply filed on 16 December 2025.
Applicant’s election without traverse of claims 1-20 and 29 in the reply filed on 16 December 2025 is acknowledged.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 and 29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more, as detailed below.
Regarding claim 1:
Step 1: Statutory Category - Yes
The claim is directed to an apparatus (e.g., “the network entity”), which falls within one of the four statutory categories. See MPEP 2106.03.
Step 2A Prong 1: Judicial Exception – Yes
Independent claim 1 includes limitations that recite an abstract idea. The claim recites “select a first route from the initial position of the wireless device to at least a first location of the one or more locations based at least in part on the information indicative of the navigation environment and the information indicative of the estimated traversal distance, the first location proximate a first target asset of the one or more target assets;” which, under its broadest reasonable interpretation, covers performance of the limitation in the human mind. For example, a human mind could reasonably determine or choose a route from a plurality of routes based on the given data (e.g., the navigation environment and the information indicative of the estimated traversal distance). As such, the claim recites at least one abstract idea.
Step 2A Prong 2: Practical Application – No
Claim 1 is evaluated to determine whether, as a whole, it integrates the recited judicial exception into a practical application. As noted in the 2019 PEG, it must be determined whether any additional elements in the claim beyond the abstract idea integrate the exception into a practical application in a manner that imposes a meaningful limit on the judicial exception. The courts have indicated that additional elements merely using a computer to implement an abstract idea, adding insignificant extra-solution activity, or generally linking use of a judicial exception to a particular technological environment or field of use do not integrate a judicial exception into a “practical application”.
The claim does not include additional elements that are sufficient to amount to integrating the judicial exception into a practical application. For example, the claimed elements “obtain information indicative of a navigation environment and information indicative of an estimated traversal distance for a plurality of potential routes from an initial position of a wireless device to one or more locations proximate one or more target assets, wherein obtaining information indicative of the navigation environment for the plurality of potential routes comprises obtaining image data from one or more cameras positioned to monitor one or more segments of the plurality of potential routes” and “transmit, via the one or more transceivers, one or more navigation commands for the first route to the wireless device, one or more infrastructure devices, or a combination thereof” are recited at a high level of generality and amount to mere pre- or post-solution actions, which is a form of extra-solution activity. The additional elements of “a network entity”, “one or more memories”, “one or more processors”, “one or more transceivers”, and “a wireless device” recited in claim 1 are merely tools being used to perform the abstract idea. The “a network entity”, “one or more memories”, “one or more processors”, “one or more transceivers”, and “a wireless device” are recited at a high level of generality and amount to no more than mere instructions to apply the exception using a generic or general-purpose computer. The components merely automate the aforementioned steps and thus do not integrate the judicial exception into a “practical application”. These additional elements can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of computers. See MPEP 2106.05(h).
Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B:
Claim 1 is evaluated as to whether the claim as a whole amounts to significantly more
than the recited exception, i.e., whether any additional element, or combination of additional elements, adds an inventive concept to the claim.
The claim does not include additional elements that are sufficient to provide an
inventive concept in Step 2B, for example, the claimed elements “obtain information indicative of a navigation environment and information indicative of an estimated traversal distance for a plurality of potential routes from an initial position of a wireless device to one or more locations proximate one or more target assets, wherein obtaining information indicative of the navigation environment for the plurality of potential routes comprises obtaining image data from one or more cameras positioned to monitor one or more segments of the plurality of potential routes” and “transmit, via the one or more transceivers, one or more navigation commands for the first route to the wireless device, one or more infrastructure devices, or a combination thereof” are well-understood, routine and conventional activity in the art. See MPEP 2106.05(d), II, “The courts have recognized the following computer functions as well‐understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information);”.
As discussed with respect to Step 2A Prong 2, the additional elements of “a network entity”, “one or more memories”, “one or more processors”, “one or more transceivers”, and “a wireless device” are merely tools being used to perform the abstract idea. The “a network entity”, “one or more memories”, “one or more processors”, “one or more transceivers”, and “a wireless device” are recited at a high level of generality and amount to no more than mere instructions to apply the exception using a generic or general-purpose computer. These additional elements can also be viewed as nothing more than an attempt to generally link the use of the judicial exception to the technological environment of computers.
Accordingly, the claim is not patent eligible.
Regarding claim 29, the claim recites a method, which falls within at least one of the four statutory categories. Claim 29 recites limitations similar to those of independent claim 1 and is therefore not eligible for the same reasons discussed above with respect to claim 1. Discussion of the remaining limitations is omitted for brevity.
Claims 2-20 are also rejected under 35 U.S.C. 101 by virtue of their dependency from the independent claims.
Claims 2-20 do not recite additional elements that integrate the judicial exception into a practical application, because the additional elements are directed toward additional aspects of the judicial exception and/or well-understood, routine, and conventional additional elements that do not integrate the judicial exception into a practical application. For example, the limitations recited in claim 3 further the abstract idea.
The dependent claims are rejected under 35 U.S.C. 101 under a rationale similar to that applied to their respective independent claims.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4, 7, 10-11, 13-16, 19, and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1).
Regarding claim 1, and similarly with respect to claim 29, Glaser et al. discloses A network
entity, comprising: one or more memories; one or more transceivers; and one or more processors communicatively coupled to the one or more memories and the one or more transceivers, the one or more processors, either alone or in combination, configured to: obtain information indicative of a navigation environment ([0105] “the system (more specifically an agent management system 1400) supplies agent path directions, and that is used in combination with an obstacle avoidance system and/or other subsystems of the robotic agent 1110 to navigate through the environment with the objective of following the agent path directions while negotiating the immediate environment.”, [0111] “The CV monitoring system 1210 will preferably include various computing elements used in processing image data collected by an imaging system.”, [0112] “The imaging system functions to collect image data within the environment. The imaging system preferably includes a set of image capture devices. The imaging system might collect some combination of visual, infrared, depth-based, lidar, radar, sonar, and/or other types of image data.”) and information indicative of an estimated traversal distance for a plurality of potential routes from an initial position of a wireless device to one or more locations proximate one or more target assets, wherein obtaining information indicative of the navigation environment for the plurality of potential routes comprises obtaining image data from one or more cameras positioned to monitor one or more segments of the plurality of potential routes; (Figures 2A-3, [0128] “The planogram mapping processor service 1300 functions to generate a product location map (e.g., a planogram). The product location map can indicate where in the environment particular items are located. This can be used to indicate where a product is stocked on a shelf in a store.”, [0216] “can enable and/or use a dynamic planogram generated as a result of tracing product scanning identification events with predicted locations of products (using the CV monitoring system) and using the resulting planogram to form a waypoint graph usable for route planning.”, [0217] “mapping, using a graph traversal process for route planning, agent path directions based on product location map P1120, which includes: determining a waypoint graph for a set of waypoints based on the product location map P1121, and determining the agent path directions by performing a graph traversal process of the waypoint graph P1122; and updating navigation system of an agent with the agent path directions P1130.”, and [0249] “the waypoint graph is a graph data structure with node-link mappings where the waypoints are represented as nodes and links are representations of travel scores. A travel score may be a measure of estimated travel time, travel distance, or other metrics. In some instances, the travel score may be a score that weighs multiple factors such as time, distance, complexity, risk of changing conditions, cart or robotic agent navigation challenges, and the like. 
Performing the graph traversal process can include performing a traveling salesperson process minimizing travel cost for navigating the waypoints.”) select a first route from the initial position of the wireless device to at least a first location of the one or more locations based at least in part on the information indicative of the navigation environment and the information indicative of the estimated traversal distance, the first location proximate a first target asset of the one or more target assets; ([0074] “a mobile robot, the robot can use an environment monitoring system and optionally unique planogram generation approach for more accurate navigation to a set of product locations.”, [0249] “the waypoint graph is a graph data structure with node-link mappings where the waypoints are represented as nodes and links are representations of travel scores. A travel score may be a measure of estimated travel time, travel distance, or other metrics. In some instances, the travel score may be a score that weighs multiple factors such as time, distance, complexity, risk of changing conditions, cart or robotic agent navigation challenges, and the like. Performing the graph traversal process can include performing a traveling salesperson process minimizing travel cost for navigating the waypoints.”, and [0284] “method can include updating order picking directions based on CV monitored state of the product location map. For example, detecting changing conditions may be used to determining an updated waypoint graph and updated agent path directions. This update may be further updated based on current status of the agent such as location of the agent and current path trajectory (is the agent following a recommended path, are they making progress along a path). This update may additionally factor in inventory conditions as indicated in the product location map. Changes in product availability or relocation of an item can be detected and automatically used to update directions.”)
transmit, via the one or more transceivers, one or more navigation commands for the first
route to the wireless device, one or more infrastructure devices, or a combination thereof. ([0105] “The mobile robotic agent 1110 may be any suitable type of robotic device or other computer-controlled device with a form of locomotion. The mobile robotic agent 1110 may include additional sensor systems and navigational control systems to facilitate moving through the environment. In one variation, the system (more specifically an agent management system 1400) supplies agent path directions, and that is used in combination with an obstacle avoidance system and/or other subsystems of the robotic agent 1110 to navigate through the environment with the objective of following the agent path directions while negotiating the immediate environment.”)
However, Glaser et al. does not explicitly disclose obtaining image data from one or more cameras positioned to monitor one or more segments of the plurality of potential routes.
Khatravath et al. teaches obtaining image data from one or more cameras positioned to monitor one or more segments of the plurality of potential routes; ([0021] “distance calculation module 206 computes actual distance for each of the plurality of paths by converting pixel data associated with the plurality of paths captured from the camera feeds.”, [0080] “identifying the traversal path is influenced by proper monitoring of the warehouse using analysis performed on live camera feeds captured by cameras installed within the warehouse.”, [0044] “To compute the task completion plan, at 402, a plurality of paths within the warehouse are identified that can be used to transport the article from the originating point to the destination point. Each path connects the originating point to the destination point. These plurality of paths are identified using the warehouse layout map and the analysis performed on the camera feeds.”, and [0083] “The decisions on identifying the traversal path is influenced by proper monitoring of the warehouse using analysis performed on live camera feeds captured by cameras installed within the warehouse. The method correlates live traffic information thus captured with various cost combinations of labor and vehicle/equipment, to suggest a cost effective path and employee or vehicle or a combination thereof for completion of a task.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. to incorporate image data associated with the plurality of paths as taught by Khatravath et al., because “identifying the traversal path is influenced by proper monitoring of the warehouse using analysis performed on live camera feeds captured by cameras installed within the warehouse…the warehouse management becomes more efficient resulting in a high time and cost saving.” ([0083], Khatravath et al.)
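For illustration of the routing technique described in the Glaser paragraphs cited above (a waypoint graph whose links carry travel scores, traversed with a traveling-salesperson process, [0249]) and the camera-based path distance estimation cited from Khatravath ([0021]), the following is a minimal Python sketch. It is not code from either reference; all identifiers, values, and the calibration factor are hypothetical.

from itertools import permutations

METERS_PER_PIXEL = 0.05  # hypothetical camera calibration factor

def segment_distance(pixel_length):
    # Convert a route segment's measured pixel length from a camera feed to meters.
    return pixel_length * METERS_PER_PIXEL

# Waypoint graph: waypoints are nodes; links carry travel scores (here, distances).
travel_score = {
    ("start", "aisle_3"): segment_distance(240.0),
    ("start", "aisle_7"): segment_distance(410.0),
    ("aisle_3", "aisle_7"): segment_distance(180.0),
    ("aisle_7", "aisle_3"): segment_distance(180.0),
}

def route_cost(route):
    # Sum the travel scores along an ordered sequence of waypoints.
    return sum(travel_score[(a, b)] for a, b in zip(route, route[1:]))

def select_route(start, target_locations):
    # Traveling-salesperson style search: try every visiting order and keep the cheapest.
    best_order = min(permutations(target_locations),
                     key=lambda order: route_cost((start, *order)))
    return (start, *best_order), route_cost((start, *best_order))

print(select_route("start", ["aisle_3", "aisle_7"]))  # lowest-cost route and its total distance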
Regarding claim 4, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
Glaser et al. discloses wherein, to obtain the image data from the one or more cameras positioned to monitor the one or more segments of the plurality of potential routes, the one or more processors, either alone or in combination, are configured to receive image data obtained by one or more infrastructure devices, one or more mobile devices, or both, and further configured to: detect or predict one or more navigation impediments based at least in part on the image data; and ([0105] “The mobile robotic agent 1110 may include additional sensor systems and navigational control systems to facilitate moving through the environment. In one variation, the system (more specifically an agent management system 1400) supplies agent path directions, and that is used in combination with an obstacle avoidance system and/or other subsystems of the robotic agent 1110 to navigate through the environment with the objective of following the agent path directions while negotiating the immediate environment.”, and [0223] “the robotic agent, accordingly, may include, at the mobile robotic agent, sensing nearby environment conditions with an onboard sensor system of the remote robotic agent, and navigating the nearby environment based on the nearby environment conditions. In this way, navigating the mobile robotic agent based on the agent path directions and input from an obstacle avoidance system of the mobile robotic agent is further based on the nearby environment conditions. Navigating may perform operations such as automatically steering around or avoiding obstacles, slowing down around certain objects (e.g., people, children, animals, other mobile robotic agents, etc.)”)
obtain updated information indicative of the navigation environment recurrently, based at least in part on an occurrence of an event, based at least in part on a schedule, or a combination thereof. ([0129] “[T]he CV monitoring system 1210 is preferably used in creating a digital planogram or product location map. The product location map can be a continuously update data model relating inventory labels to locations in the store.”, and [0409] “Generating product event location data may be performed in real-time in response to the occurrence of some event like a person moving through an environment, a person performing some action, the state of a product on a shelf changing, and/or any suitable state of the image data. If transaction data is additionally collected and processed in substantially real-time, a product location map can be updated with low latency (e.g., accurate as of 5-15 minutes).”)
Regarding claim 7, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
Glaser et al. discloses wherein the wireless device is included in an automated guided vehicle (AGV). ([0104] “may include different types of agent devices 1100 such as at least one mobile robotic agent 1110”, and [0105] “The mobile robotic agent 1110 functions as the computer-controlled device moving through the store to different waypoints set based on product locations. The mobile robotic agent 1110 may be any suitable type of robotic device or other computer-controlled device with a form of locomotion. The mobile robotic agent 1110 may include additional sensor systems and navigational control systems to facilitate moving through the environment. In one variation, the system (more specifically an agent management system 1400) supplies agent path directions, and that is used in combination with an obstacle avoidance system and/or other subsystems of the robotic agent 1110 to navigate through the environment with the objective of following the agent path directions while negotiating the immediate environment.”)
Regarding claim 13, Glaser et al. in combination with Khatravath et al. discloses The
network entity of claim 1,
Glaser et al. discloses wherein the one or more processors, either alone or in combination,
are further configured to: obtain information indicative of a navigation environment and information indicative of an estimated traversal distance for a plurality of potential routes from the subsequent position of the wireless device to the one or more locations proximate the one or more target assets; and ([0150] “actively directing a user to items in an environment may additionally use reactive route planning. Routes may be updated and determined based on a sensor-based planogram that can have substantially real-time conditions of item (e.g., product) locations in a retail environment. As shown in FIG. 38, a method variation directing a user to items in environment with dynamic routing may include: accessing item data of a user at an environment S110; at a sensor-based monitoring system, monitoring location of the user within the environment S120; mapping agent path directions for the set of items within the environment based on a product location map S140, the agent path directions indicating a sequence of items; and modifying state of a subset of feedback devices S130 based on the item data and the location of the user, which comprises, sequentially updating the subset of feedback devices based on a current item in the sequence of item selection and the location of the user S137, and upon determining completion of a user-item interaction for the current item, updating the current item to a next item in the sequence of items S138.”, [0211] “can include generating, using a CV monitoring system, a product location map (P110); mapping, using a graph traversal process for route planning, order picking directions based on planogram (P120); and updating remote agent client devices with order picking directions (P130). This implementation functions to implement traveling salesman problem solution processes (or other forms of graph modeling processes) in connection with a CV-based data.”, [0222] “the agent path directions can be generalized instructions to go down certain aisles or through certain regions of a store in a recommended sequence so that different waypoints can be visited in an efficient manner.”, [0223] “Tracking can be performed by using the CV monitoring system and/or other sensor-based monitoring system to detect user-item interactions (e.g., product pickup or put-back events). This can be used to automatically update an app of a picker to reflect the items selected for an order.”, and see at least [0226] and [0238]) obtain information indicative of a detected or predicted navigation impediment for at least one impeded route segment included in a remaining portion of the first route, subsequent to transmitting at least one of the one or more navigation commands for the first route to the wireless device, one or more infrastructure devices, or the combination thereof and at a subsequent position of the wireless device; ([0056] “the system and method can be used in combination with a substantially live planogram so that real-time dynamic route planning may be used to direct a user through the store in an enhanced manner (e.g., shortest path, shortest time, less congestion, best order for a given shopping list (e.g., frozen items last)).”, and [0205] “The agent path directions may be updated while navigating the store. Accordingly, the method may include updating the agent path directions in real-time based on the location of the user.
The agent path directions may be updated or changed for various reasons such as a user deviating from the path, inventory state changes, congestion in the store. Accordingly, in one variation, updating the agent path directions in real-time based on the location of the user is further based on a change in inventory status of an item. In another variation, updating the agent path directions in real-time based on the location of the user is further based on detected user congestion. Other variations of generating and adjusting the agent path directions”) select a second route to the first location proximate the first target asset based at least in part on the information indicative of the navigation environment and the information indicative of the estimated traversal distance for the plurality of potential routes from the subsequent position of the wireless device to the first location proximate the first target asset, the second route not including the at least one impeded route segment. ([0225] “when the robotic agent detects barriers or blocks in a path, or when conditions are different from reflected in the waypoint graph (e.g., floor is wet, high congestion or activity in a particular region), updating the waypoint graph with detected change in conditions around the robotic agent and updating the agent path directions using an updated waypoint graph.”, and [0226] “generating, using a CV monitoring system, a product location map of a plurality of items stocked in the item-stocked environment, by performing for a plurality of items merging product identifier data of an item with tracking, using the CV monitoring system, of the item to product stocking locations P3110; mapping, using a graph traversal process for route planning, agent path directions based on product location map P3120, which includes: detecting location of a user device agent P3121, receiving a set of product identifiers and determining a set of waypoints associated with locations of the product identifiers P3122, determining a waypoint graph for the set of waypoints based on the product location map P3123, and determining the agent path directions by performing a graph traversal process with the waypoint graph P3124; and updating the user device agent with the agent path directions P3130.”)
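As an illustration of the re-routing behavior described in the Glaser passages cited for claim 13 (updating the waypoint graph when a segment is blocked or congested and recomputing the route, [0225]), the following minimal sketch penalizes an impeded link and re-selects a route that avoids it. The graph contents and identifiers are hypothetical and are not taken from the reference.

import math

# Hypothetical waypoint graph: link -> travel score (meters).
graph = {
    ("pos", "A"): 10.0, ("pos", "B"): 14.0,
    ("A", "target"): 5.0, ("B", "target"): 6.0,
}

def best_route(start, goal):
    # Compare the two-hop candidate routes in this toy graph and keep the cheapest.
    candidates = [(start, mid, goal)
                  for (a, mid) in graph
                  if a == start and (mid, goal) in graph]
    return min(candidates, key=lambda r: graph[(r[0], r[1])] + graph[(r[1], r[2])])

print(best_route("pos", "target"))   # ('pos', 'A', 'target'): initially the cheaper route

# A navigation impediment is detected on segment pos -> A (e.g., congestion seen in image data):
# inflate its travel score and re-select, so the second route avoids the impeded segment.
graph[("pos", "A")] = math.inf
print(best_route("pos", "target"))   # ('pos', 'B', 'target')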
Regarding claim 10, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
Glaser et al. discloses wherein the one or more processors, either alone or in combination,
are further configured to: determine a set of navigation commands for the first route, the set of navigation commands including the one or more navigation commands for the first route; and wherein, to transmit the one or more navigation commands for the first route to the wireless device, one or more infrastructure devices, or the combination thereof, the one or more processors, either alone or in combination, are configured to transmit the one or more navigation commands sequentially. ([0202] “providing guidance relevant to multiple items in the environment, the feedback devices may have state modified when the user is present in a location where they can observe the state of the feedback device… For example, for a user with an associated shopping list, the method may incrementally provide guidance to items on the shopping list such that it first provides guidance to a first subset of items on the shopping list, then proceeds to provide guidance to a second subset of items on the shopping list. These subsets may be individual items or may, for example, be a number of items in close proximity. As discussed below, a recommended route may be generated, and a particular sequence of items may be provided such that the method iteratively cycles through the items to guide the user to items in an order based on the recommended route.”, and [0204] “determine a recommended route for a given set of items. This route may be used to determine the sequence in which the feedback items have state modified. Accordingly, the method may include mapping agent path directions for the set of items within the environment based on a product location map S140, the agent path directions indicating a sequence of items; and wherein modifying state of a subset of feedback devices based on the item data and the location of the user further comprises sequentially updating the subset of feedback devices for a current item in the sequence of items, and upon determining completion of a user-item interaction for the current item, updating the current item to a next item in the sequence of items. Accordingly, the method may determine a route specified through the agent path directions that would facilitate navigating to each item in the set of items. Then based on a sequence in which the items are visited, the method will update the feedback devices to guide a user to the next item.”)
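To illustrate the sequential command delivery described in the Glaser passages cited for claim 10 (guidance provided incrementally, advancing to the next item once the current user-item interaction is detected as complete, [0202] and [0204]), the following is a rough sketch. The function names and callbacks are assumptions for illustration, not an API from the reference.

from collections import deque

def dispatch_sequentially(commands, send, step_completed):
    # Transmit each navigation command in route order, advancing only after the
    # current step is reported complete (e.g., a detected user-item interaction).
    queue = deque(commands)
    while queue:
        send(queue.popleft())
        while not step_completed():
            pass  # in practice, wait for a completion event rather than busy-poll

# Minimal usage with stub callbacks standing in for the transceiver and the
# completion detection; in a real system these would be asynchronous.
sent = []
dispatch_sequentially(
    ["go to aisle 3", "turn left", "stop at shelf 12"],
    send=sent.append,
    step_completed=lambda: True,
)
print(sent)  # the commands were transmitted one at a time, in sequence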
Regarding claim 11, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 10,
Glaser et al. discloses wherein, to determine the set of navigation commands, the one or
more processors, either alone or in combination, are configured to select the set of navigation commands based at least in part on one or more wireless device capabilities, and ([0105] “The mobile robotic agent 1110 functions as the computer-controlled device moving through the store to different waypoints set based on product locations. The mobile robotic agent 1110 may be any suitable type of robotic device or other computer-controlled device with a form of locomotion. The mobile robotic agent 1110 may include additional sensor systems and navigational control systems to facilitate moving through the environment. In one variation, the system (more specifically an agent management system 1400) supplies agent path directions, and that is used in combination with an obstacle avoidance system and/or other subsystems of the robotic agent 1110 to navigate through the environment with the objective of following the agent path directions while negotiating the immediate environment.”, and [0107] “In a system variation used for a user device agent 1120 that may be used by a human user navigating the environment, the user device agent 1120 may be customized to the particular application. The user device agent 1120 may be an application or digital service provided through the computing device. In the use case where the user device agent 1120 is used to provide product picking instructions, the user device agent 1120 may be or include a picker interface. When used for order fulfilling the system may additionally include an order interface 1500.”, and see at least figure 2A-2B) further configured to transmit the one or more navigation commands to the wireless device using a signaling protocol supported by the wireless device. ([0203] “the feedback devices may be updated to deliver infrared identifying signals or electromagnetic signals (e.g., nearfield RF broadcast), audio signals (e.g., outside of human-detectable frequency range), that can be transparently detected and used to trigger user interface events. For example, a user wearing a headset may have their shopping list used to cause relevant feedback devices to begin broadcasting a signal that can be detected by the headset, that when detected cause a navigational prompt to come up to indicate information like “Product X on this aisle” and/or “Product X is near by””)
Regarding claim 14, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
Glaser et al. discloses wherein the one or more processors, either alone or in combination, are further configured to: determine that the wireless device has deviated from the first route, subsequent to transmitting the one or more navigation commands for the first route to the wireless device, one or more infrastructure devices, or the combination thereof; ([0205] “The agent path directions may be updated while navigating the store. Accordingly, the method may include updating the agent path directions in real-time based on the location of the user. The agent path directions may be updated or changed for various reasons such as a user deviating from the path... updating the agent path directions in real-time based on the location of the user is further based on detected user congestion. Other variations of generating and adjusting the agent path directions are discussed herein.”) and transmit one or more corrective navigation commands to the wireless device, one or more infrastructure devices, or a combination thereof. ([0295] “This variation may include detecting deviation from recommended path and updating order picking directions, which functions to adjust instructions based on sensed activities of the agent. For example, while the picking directions may specify or expect an agent to follow a particular path, the agent may take a detour or go another route, which can be detected using video/image-based person tracking or using other sensor-based tracking. The method can adapt and provided updated directions in response to detected current conditions.”)
Regarding claim 15, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
Glaser et al. discloses obtain an updated target asset list subsequent to transmitting the one or more navigation commands for the first route to the wireless device, one or more infrastructure devices, or the combination thereof; obtain updated information indicative of the navigation environment and information indicative of an estimated traversal distance for one or more potential routes based at least in part on the updated target asset list; and use the updated target asset list, the updated information indicative of the navigation environment for the one or more potential routes, ([0150] “actively directing a user to items in an environment may additionally use reactive route planning. Routes may be updated and determined based on a sensor-based planogram that can have substantially real-time conditions of item (e.g., product) locations in a retail environment. As shown in FIG. 38, a method variation directing a user to items in environment with dynamic routing may include: accessing item data of a user at an environment S110; at a sensor-based monitoring system, monitoring location of the user within the environment S120; mapping agent path directions for the set of items within the environment based on a product location map S140, the agent path directions indicating a sequence of items; and modifying state of a subset of feedback devices S130 based on the item data and the location of the user, which comprises, sequentially updating the subset of feedback devices based on a current item in the sequence of item selection and the location of the user S137, and upon determining completion of a user-item interaction for the current item, updating the current item to a next item in the sequence of items S138.”, [0211] “can include generating, using a CV monitoring system, a product location map (P110); mapping, using a graph traversal process for route planning, order picking directions based on planogram (P120); and updating remote agent client devices with order picking directions (P130). This implementation functions to implement traveling salesman problem solution processes (or other forms of graph modeling processes) in connection with a CV-based data.”, [0222] “the agent path directions can be generalized instructions to go down certain aisles or through certain regions of a store in a recommended sequence so that different waypoints can be visited in an efficient manner.”, [0223] “Tracking can be performed by using the CV monitoring system and/or other sensor-based monitoring system to detect user-item interactions (e.g., product pickup or put-back events). This can be used to automatically update an app of a picker to reflect the items selected for an order.”, and see at least [0226] and [0238]) and the information indicative of the estimated traversal distance for the one or more potential routes to select a second route, wherein the second route includes at least one different route segment than a remaining portion of the first route. ([0211] “can include generating, using a CV monitoring system, a product location map (P110); mapping, using a graph traversal process for route planning, order picking directions based on planogram (P120); and updating remote agent client devices with order picking directions (P130).
This implementation functions to implement traveling salesman problem solution processes (or other forms of graph modeling processes) in connection with a CV-based data.”, [0222] “the agent path directions can be generalized instructions to go down certain aisles or through certain regions of a store in a recommended sequence so that different waypoints can be visited in an efficient manner.”, [0223] “Tracking can be performed by using the CV monitoring system and/or other sensor-based monitoring system to detect user-item interactions (e.g., product pickup or put-back events). This can be used to automatically update an app of a picker to reflect the items selected for an order.”, [0251] “the CV monitoring system may track people, detect robotic agents, detect obstructions (e.g., product stocking activity, stagnant lines, crowds), and/or other conditions in the store. These may be used to update the travel scores links in the waypoint graph. Accordingly, the method can include detecting, using the computer vision monitoring system, conditions (e.g., congestion, obstructions, activity etc.) within the environment; and wherein determining the waypoint graph for the set of waypoints based on the product location map is further based on the conditions. This may, for example, be used to route an agent around crowded sections in a store.”, [0259] “detecting completion of an item selection task can be used in triggering updating of order picking directions. For example, picking directions such as the next item for picking, the path to use, and/or other details may be surfaced or presented to the user as next task directives. In some variations, the route planning in the mapping process (P120) can be updated prior to surfacing next tasks.”, and see at least [0226], [0238], and [0290])
Regarding claim 16, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
Glaser et al. discloses wherein the one or more processors, either alone or in combination, are further configured to: detect a position of the wireless device proximate the first target asset; and transmit, via the one or more transceivers, one or more retrieval commands to the
wireless device, one or more infrastructure devices, or a combination thereof. ([0105] “The mobile robotic agent 1110 may be any suitable type of robotic device or other computer-controlled device with a form of locomotion. The mobile robotic agent 1110 may include additional sensor systems and navigational control systems to facilitate moving through the environment. In one variation, the system (more specifically an agent management system 1400) supplies agent path directions, and that is used in combination with an obstacle avoidance system and/or other subsystems of the robotic agent 1110 to navigate through the environment with the objective of following the agent path directions while negotiating the immediate environment.”, [0249] “In one variation the waypoint graph is a graph data structure with node-link mappings where the waypoints are represented as nodes and links are representations of travel scores. A travel score may be a measure of estimated travel time, travel distance, or other metrics. In some instances, the travel score may be a score that weighs multiple factors such as time, distance, complexity, risk of changing conditions, cart or robotic agent navigation challenges, and the like. Performing the graph traversal process can include performing a traveling salesperson process minimizing travel cost for navigating the waypoints”, [0258] “This variation can detect when an agent is in proximity to an item of an assigned order, and then confirm if the correct item was selected. When an agent is assigned multiple orders, fulfillment confirmation may additionally use CV monitoring to confirm that an item was correctly selected and then correctly sorted into a correct bin, basket, bag, or other type of container.”, and figure 3 “order picking directions”)
Regarding claim 19, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 16,
Glaser et al. discloses wherein the one or more processors, either alone or in combination, are further configured to: determine a successful retrieval of the first target asset or an erroneous retrieval of a different asset; and transmit, via the one or more transceivers, an indication of the successful retrieval or the erroneous retrieval to the wireless device, one or more infrastructure devices, or a combination thereof. ([0231] “updating the user device agent with the agent path directions may include tracking user-item interactions associated with the user device agent and then detecting when an interaction is completed for each waypoint. More specifically, the waypoints are product locations, and the variation may be used for detecting a product selection for a product in a shopping list, when a product is picked up this can be used to update the currently presented agent path direction. For example, a user may receive turn-by-turn directions to product A from a mobile computing device, when in proximity to the product, the CV monitoring system (and/or other sensor-based system) can detect the user picking up the item, and then the mobile computing device can be used to reflect that product A was picked up and then the turn-by-turn directions can be updated to a next product B. This can be repeated until all products are selected. In one variation, the waypoint graph and/or the agent path directions may be updated one or more times during the picking session. In one particular variation, the waypoint graph and then the agent path directions may be updated based on real-time conditions after a product waypoint is visited so that the next product waypoint and path to that next product waypoint can be enhanced.”)
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1) and further in view of Suzuki et al. (US 20200333789 A1).
Regarding claim 2, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose wherein the one or more processors, either alone or in combination, are further configured to process the image data to detect or predict one or more navigation impediments.
Suzuki et al. teaches wherein the one or more processors, either alone or in combination, are further configured to process the image data to detect or predict one or more navigation impediments. ([0203] “The monitoring camera information determination unit 6116 performs estimation by performing coordinate transformation of position/orientation information (object position/orientation information) of an object existing in the environment on the world coordinate system based on the monitoring camera information sent from the monitoring camera information acquisition unit 6113. Furthermore, the monitoring camera information determination unit 6116 determines whether an object exists at a proper position in the environment, based on the map information sent from the map-related information management unit 6112 and the position/orientation information of the mobile object 11 sent from the position/orientation estimation unit 6114. That is, it is determined whether or not an obstacle related to the object has occurred… The monitoring camera information determination unit 6116 may be disposed at a location other than the location shown in the figure, such as a location in the monitoring camera management system 14… In a case where it is determined that the object is on the route of the mobile object 11, the object may interfere with the movement of the mobile object 11.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate obstacle detection using camera(s) as taught by Suzuki et al. for the purpose of allowing the mobile object to avoid an object that may interfere with its movement.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1) and further in view of Johnson et al. (US 20220084153 A1).
Regarding claim 3, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose
wherein, to use the information indicative of the navigation environment to select the first route, the one or more processors, either alone or in combination, are further configured to detect or predict a navigation impediment in an impeded segment of a non-selected route of the plurality of potential routes, wherein the non-selected route is shorter than the first route.
Johnson et al. teaches wherein, to use the information indicative of the navigation environment to select the first route, the one or more processors, either alone or in combination, are further configured to detect or predict a navigation impediment in an impeded segment of a non-selected route of the plurality of potential routes, wherein the non-selected route is shorter than the first route. ([0011] “The robot may determine an initial order execution sequence for the plurality of items in each order and wherein the step of assessing a criteria may further include assessing one or more of the order in the order sequence of each item in a region in which are located at least one operator, or the distance of travel or the travel time between the current location of the robot and each of the item locations associated with the regions in which are located at least one operator robot. The step of selecting the item location to which the robot is to navigate may be based on the shortest travel distance or shortest travel time between the current location of the robot and each of the item locations associated with the regions in which are located at least one operator robot. The step of assessing may include determining the item locations which are within a predetermined distance or travel time between the current location and the respective item locations; and the step of selecting the item location to which the robot is to navigate may be determined based on which of the item locations within the predetermined distance or travel time are next in order in the initial order execution sequence.”, and [0069] “process of adjusting the pick sequence of the robot to regions with operators, there may be situations where it is undesirable and inefficient to direct the robot to locations where there is too much congestion. In order to avoid this, the robot may communicate with a robot monitoring server that tracks congestion based on clusters of robots or operators within the navigational space to improve navigation efficiency. Where the clusters becomes too concentrated, a congested area can form, which can cause operators and robots to impede passage and travel speed of other operators and robots, causing inefficient delays and increasing collision risk.”, and [0070])
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate adjusting the robot’s route based on an impediment (e.g., congestion) as taught by Johnson et al. for the purpose of allowing the robot to “avoid such congestion.” ([0070], Johnson et al.)
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1) and further in view of Dayal (US 20160210834 A1).
Regarding claim 5, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
Glaser et al. discloses wherein, to obtain the information indicative of the navigation
environment for the plurality of potential routes (([0105] “the system (more specifically an agent management system 1400) supplies agent path directions, and that is used in combination with an obstacle avoidance system and/or other subsystems of the robotic agent 1110 to navigate through the environment with the objective of following the agent path directions while negotiating the immediate environment.”, [0111] “The CV monitoring system 1210 will preferably include various computing elements used in processing image data collected by an imaging system.”, [0112] “The imaging system functions to collect image data within the environment. The imaging system preferably includes a set of image capture devices. The imaging system might collect some combination of visual, infrared, depth-based, lidar, radar, sonar, and/or other types of image data.”)
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose
…the one or more processors, either alone or in combination, are configured to receive audio data obtained by one or more audio infrastructure devices, one or more mobile devices, or both, and further configured to: detect or predict one or more navigation impediments based at least in part on the audio data; and obtain updated information indicative of the navigation environment recurrently, based on an occurrence of an event, based at least in part on a schedule, or a combination thereof.
Dayal teaches wherein, to obtain the information indicative of the navigation environment
for the plurality of potential routes, the one or more processors, either alone or in combination, are configured to receive audio data obtained by one or more audio infrastructure devices, one or more mobile devices, or both, and further configured to: detect or predict one or more navigation impediments based at least in part on the audio data; and ([0008] “the smart necklace also includes at least one microphone positioned on the first lower portion, the second lower portion or the upper portion and configured to detect audio data associated with a potential hazard. The smart necklace also includes a camera positioned on the first lower portion or the second lower portion and configured to detect image data associated with the potential hazard. The smart necklace also includes a processor coupled to the camera and the at least one microphone and configured to determine whether the potential hazard presents a real hazard based on the detected audio data and the detected image data.”, and [0089] “the display 135 that provides navigation instructions around the hazard. The speaker 132 may provide speech data directing the user around the hazard and/or the speaker 132 and/or the vibration unit 133 may provide particular tones and/or vibrations on either side of the wearable smart device 100 indicating instructions for the user to navigate around the hazard. For example, a left vibration unit may vibrate indicating that the user should move to the left in order to avoid the hazard. In some embodiments, a right vibration unit may vibrate indicating that the user should avoid moving to the right and should move left in order to avoid the hazard.”) obtain updated information indicative of the navigation environment recurrently, based on an occurrence of an event, based at least in part on a schedule, or a combination thereof. ([0043] “a network diagram with edges, or a series of coordinates with features. The map data may contain points of interest to the user and, as the user changes location, the stereo cameras 121 and/or cameras 122 may passively recognize additional points of interest and update the map data. In some embodiments, map data may be updated with audio information. For example, a factory may always output the same noise at the same frequency and volume. The wearable smart device 100 may update the map with audio data associated with the location and including the frequency and volume of the noise.”, and [0126] “The wearable smart device 900 may then update a database, such as the database 500, to include the hazard of the low hanging branch 907 at the location 912. Because the severity level is 2.5, the wearable smart device 900 may alert the user to the hazard if the wearable smart device 900 is designed to alert the user to hazards having a severity level of 2.5 or higher.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate detecting hazards/obstacles using audio data as taught by Dayal for the purpose of alerting the user to a hazard and allowing the user to navigate around the detected hazard, increasing safety.
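As a rough illustration of the audio-based hazard handling described in the Dayal passages cited above (matching detected audio against known hazard characteristics such as frequency and volume, and acting only when a severity threshold is met, cf. [0043] and [0126]), the following sketch uses entirely hypothetical signatures; only the 2.5 severity threshold mirrors the value mentioned in [0126].

# Hypothetical (frequency Hz, minimum volume dB) signatures mapped to a label and severity.
HAZARD_SIGNATURES = {
    (1200.0, 70.0): ("forklift reversing", 3.0),
    (300.0, 60.0): ("machinery idling", 1.5),
}
ALERT_THRESHOLD = 2.5  # only hazards at or above this severity update the environment information

def detect_hazards(audio_samples):
    # Return hazards whose audio matches a known signature and whose severity
    # warrants updating the navigation-environment information.
    hazards = []
    for frequency, volume in audio_samples:
        for (sig_freq, min_volume), (label, severity) in HAZARD_SIGNATURES.items():
            if abs(frequency - sig_freq) < 50.0 and volume >= min_volume and severity >= ALERT_THRESHOLD:
                hazards.append(label)
    return hazards

print(detect_hazards([(1210.0, 75.0), (305.0, 65.0)]))  # ['forklift reversing']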
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1) and further in view of Bell et al. (US 20200064141 A1).
Regarding claim 6, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose
wherein the wireless device is associated with a visually impaired user, and wherein the one or more processors are further configured to: obtain information indicative of one or more capabilities of the wireless device, a battery status of the wireless device, a medium access control (MAC) address of the wireless device, a radiofrequency (RF) status of the wireless device, or a combination thereof; and establish a connection with the wireless device.
Bell et al. teaches wherein the wireless device is associated with a visually impaired user,
and wherein the one or more processors are further configured to: obtain information indicative of one or more capabilities of the wireless device, a battery status of the wireless device, a medium access control (MAC) address of the wireless device, a radiofrequency (RF) status of the wireless device, or a combination thereof; and establish a connection with the wireless device. ([0073] “one or more sensors on a garment (e.g., a garment similar to garment 202 shown and described in connection with FIG. 2, above) worn by the user (e.g., a handicapped pedestrian) may detect dynamic obstacle(s) 620 (such as traffic cone 612) in real time as such obstacles are detected by the sensors and encountered by the user. Moreover, the detected dynamic obstacle(s) 620 may be transmitted to one or more remote servers (e.g., similar to remote servers 114 shown and described in connection with FIG. 1, and received from a user device such as a mobile phone via cellular or other communication links). After processing by the remote servers, the navigation route being presented to the user may be re-routed so that the user may thereby avoid the dynamic obstacles 620.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate obtaining information indicative of a capability of the terminal device as taught by Bell et al. for the purpose of allowing the server to provide an appropriate action to the visually impaired user’s wireless device.
Claims 8 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1) and further in view of Zhang (US 20220373350 A1).
Regarding claim 8, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose wherein the wireless device is associated with a visually impaired user and comprises a device integrated with a mobility cart, a device integrated with a mobility aid, a device with one or more speakers, a device with one or more microphones, a device with one or more cameras, a device with vibration capability, or a combination thereof.
Zhang teaches wherein the wireless device is associated with a visually impaired user and comprises a device integrated with a mobility cart, a device integrated with a mobility aid, a device with one or more speakers, a device with one or more microphones, a device with one or more cameras, a device with vibration capability, or a combination thereof. ([0162] “the user can just use the wearable device such as sensor (including gyro and pressure sensitive sensor) to input messages to the terminal device without carrying out it from a pocket or a bag. This brings much more convenience in bad weather such as cold, raining or snowing. In some embodiments herein, it can make the blind and dumb people to input messages… the navigation can be run without a screen, for example just by vibrations, this is useful to save batteries as the existing navigation software is always the huge power consumer. In some embodiments herein, the NB-IoT devices are installed in at least one facility along a road, and it is possible to do navigations even if the road is snow-covered, especially when it is under construction and/or or ruined or suffered malicious destruction. In some embodiments herein, it can help blind and hearing impairment people to walk more safely along the road as a relevant cheap solution comparing with the VR (virtual reality) glasses or AR (augmented reality) glasses”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate accommodations for impaired users as taught by Zhang for the purpose of allowing the system to further “help blind and hearing impairment people to walk more safely”. ([0162], Zhang)
Regarding claim 17, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 16,
Glaser et al. discloses wherein the wireless device ([0069] “human controlled user device agents. Human controlled user device agents may be used by customers in a retail store, workers, or any suitable user.”, [0107] “In a system variation used for a user device agent 1120 that may be used by a human user navigating the environment, the user device agent 1120 may be customized to the particular application. The user device agent 1120 may be an application or digital service provided through the computing device. In the use case where the user device agent 1120 is used to provide product picking instructions, the user device agent 1120 may be or include a picker interface. When used for order fulfilling the system may additionally include an order interface 1500.”, [0211] “he method can include generating, using a CV monitoring system, a product location map (P110); mapping, using a graph traversal process for route planning, order picking directions based on planogram (P120); and updating remote agent client devices with order picking directions (P130). This implementation functions to implement traveling salesman problem solution processes (or other forms of graph modeling processes) in connection with a CV-based data.”, and [0214] “The method may additionally or alternatively be used in providing navigational guidance to one or more user device agents used by a human user. Customers and/or workers may have a mobile computing device (e.g., a mobile phone, audio device, smart glasses, augmented reality headset, virtual reality headset, and the like) updated so that route related feedback can be provided to a user.”)
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose wherein the wireless device is associated with a visually impaired user, and wherein the one or more retrieval commands comprise commands to generate user movement instructions at the wireless device.
Zhang teaches wherein the wireless device is associated with a visually impaired user ([0162] “the user can just use the wearable device such as sensor (including gyro and pressure sensitive sensor) to input messages to the terminal device without carrying out it from a pocket or a bag. This brings much more convenience in bad weather such as cold, raining or snowing. In some embodiments herein, it can make the blind and dumb people to input messages… the navigation can be run without a screen, for example just by vibrations, this is useful to save batteries as the existing navigation software is always the huge power consumer. In some embodiments herein, the NB-IoT devices are installed in at least one facility along a road, and it is possible to do navigations even if the road is snow-covered, especially when it is under construction and/or or ruined or suffered malicious destruction. In some embodiments herein, it can help blind and hearing impairment people to walk more safely along the road as a relevant cheap solution comparing with the VR (virtual reality) glasses or AR (augmented reality) glasses”, and [0108] “the first wearable device may guide the user based on the message. For example, the first wearable device may provide various signals (such as vibration signal, video signal, audio signal, etc. and/or combination thereof) to the user of the first wearable device to guide the user.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate accommodations for impaired users as taught by Zhang for the purpose of allowing the system to further “help blind and hearing impairment people to walk more safely”. ([0162], Zhang)
Regarding claim 18, Glaser et al. in view of Khatravath et al. and Zhang discloses
The network entity of claim 17,
Glaser et al. discloses wherein the one or more processors, either alone or in combination, are further configured to: receive, via the one or more transceivers, RF data, image data, audio data, or a combination thereof indicating user position information in response to the user movement instructions; and ([0226] “using a CV monitoring system, a product location map of a plurality of items stocked in the item-stocked environment, by performing for a plurality of items merging product identifier data of an item with tracking, using the CV monitoring system, of the item to product stocking locations P3110; mapping, using a graph traversal process for route planning, agent path directions based on product location map P3120, which includes: detecting location of a user device agent P3121”, and [0231] “a user may receive turn-by-turn directions to product A from a mobile computing device, when in proximity to the product, the CV monitoring system (and/or other sensor-based system) can detect the user picking up the item, and then the mobile computing device can be used to reflect that product A was picked up and then the turn-by-turn directions can be updated to a next product B.”) transmit, via the one or more transceivers, one or more additional retrieval commands to the wireless device, one or more infrastructure devices, or the combination thereof based at least in part on the user position information. ([0231] “updating the user device agent with the agent path directions may include tracking user-item interactions associated with the user device agent and then detecting when an interaction is completed for each waypoint. More specifically, the waypoints are product locations, and the variation may be used for detecting a product selection for a product in a shopping list, when a product is picked up this can be used to update the currently presented agent path direction. For example, a user may receive turn-by-turn directions to product A from a mobile computing device, when in proximity to the product, the CV monitoring system (and/or other sensor-based system) can detect the user picking up the item, and then the mobile computing device can be used to reflect that product A was picked up and then the turn-by-turn directions can be updated to a next product B. This can be repeated until all products are selected. In one variation, the waypoint graph and/or the agent path directions may be updated one or more times during the picking session. In one particular variation, the waypoint graph and then the agent path directions may be updated based on real-time conditions after a product waypoint is visited so that the next product waypoint and path to that next product waypoint can be enhanced.”)
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1) and further in view of Wang (CN 115497046 A).
Regarding claim 9, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
Glaser et al. discloses determine a position of the wireless device based at least in part on detecting the user ([0082] “sensor-based monitoring system 130 comprising configuration to perform operations comprising: monitoring location of the user within the environment”, and [0161] “Monitoring location of the user preferably provides resolution or capabilities to detect location of the user at least when in proximity to relevant feedback devices. Detecting a user in proximity to a feedback device in region of a targeted item, functions to determine when state should be changed. This can be used so that feedback devices are not changed until the relevant user is an appropriate location in the environment (e.g., the store). For example, digital price tags serving as the feedback devices may update when a user get in the appropriate section of a grocery store aisle.”)
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose user characteristics and wherein a detectable user characteristic is associated with a user identifier of the wireless device, wherein the user characteristic is detectable by image data, audio data, RF data, or a combination thereof, and wherein the one or more processors, either alone or in combination, are further configured to: detect the user characteristic using image data, audio data, RF data or the combination thereof received from at least one infrastructure device; and
Wang teaches wherein a detectable user characteristic is associated with a user identifier of the wireless device, wherein the user characteristic is detectable by image data, audio data, RF data, or a combination thereof, and wherein the one or more processors, either alone or in combination, are further configured to: detect the user characteristic using image data, audio data, RF data or the combination thereof received from at least one infrastructure device; and (Page 6 lines 1-5 “The user who enters the warehouse through the image acquisition device performs image acquisition to obtain user image information; performs feature extraction on the user image information to obtain user characteristic information”, and page 10 lines 27-32 “when a user enters the warehouse, when an image acquisition device collects user information, the identification request of the entering user is sent to the system, and the system synchronously sends instructions to the image acquisition device, and the image acquisition device tracks the user's characteristics throughout the process Acquisition, 30 image tracking and recognition according to user characteristics, to determine the real-time location of the user.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate image tracking and recognition of a user as taught by Wang for the purpose of “improving the safety and reliability of the inventory management” within a warehouse. (Page 11, lines 10 - 13, Wang)
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1) and further in view of Kao (US 20160135001 A1).
Regarding claim 12, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 11,
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose wherein the wireless device includes one or more microphones, and wherein the one or more processors, either alone or in combination, are configured to transmit the one or more navigation commands using a signaling protocol supported by the wireless device by instructing at least one infrastructure speaker to transmit the navigation commands using a signaling protocol using sound waves, and are further configured to: establish a connection with the wireless device using sound wave signaling, RF signaling, or a combination thereof.
Kao teaches wherein the wireless device includes one or more microphones, (102, figure 1) and wherein the one or more processors, either alone or in combination, are configured to transmit the one or more navigation commands using a signaling protocol supported by the wireless device by instructing at least one infrastructure speaker to transmit the navigation commands using a signaling protocol using sound waves, and are further configured to: ([0016] “a determination that a mobile computing device 102 is proximate to a particular location (e.g., a speaker 104 located within a particular store of a retail entity, such as “Tool Depot” in the example of FIG. 1) based on an exchange of audio signals (e.g., in a human inaudible frequency range)…when the mobile computing device 102 is proximate to the particular location, data may be communicated from the mobile computing device 102 to the microphone 110 via the first audio signal 106, from the speaker 104 to the mobile computing device 102 via the second audio signal 108, or a combination thereof.”, [0020] “the speaker 104 is located at an end of the first aisle 116 (e.g., at an “end cap” of the first aisle 116), it will be appreciated that the speaker 104 may be located at another location (e.g., near particular merchandise, such as the item 138, on the first aisle 116). In some cases, the speaker location data 124 may include an identifier of the particular retail display (e.g., an identifier, such as a unique retail display identifier, associated with the first retail display 114) and an identifier of the particular retail location (e.g., a unique retail location identifier, such as a store number or other identifier of the particular “Tool Depot” location). In some cases, the item data 136 may identify a plurality of items associated with the particular retail display (e.g., the first retail display 114) based on mapping information for the particular retail display and the particular retail location available to the server 126.”, [0021] “the mobile computing device 102 may be configured to receive the item data 136 that identifies the item 138 that is located proximate to the speaker 104 and to display information 140 that is descriptive of the item 138 based on the item data 136.”, and [0024] “the mobile computing device 102 may display the information 140 associated with the item 138 that is located proximate to the speaker 104. The information 140 may be received at the mobile computing device 102 when the mobile computing device 102 is located within a particular range of the speaker 104. The speaker location data 124 may be encoded into the second audio signal 108, and the mobile computing device 102 may send the speaker location data 124 to the server 126 (e.g., via the wireless network 128) and may receive the item data 136 from the server 126”) establish a connection with the wireless device (102, Figure 1) using sound wave signaling, RF signaling, or a combination thereof. (108, Figure 1, and [0016] “provided in response to a determination that a mobile computing device 102 is proximate to a particular location (e.g., a speaker 104 located within a particular store of a retail entity, such as “Tool Depot” in the example of FIG. 1) based on an exchange of audio signals (e.g., in a human inaudible frequency range). For example, the mobile computing device 102 may emit a first audio signal 106, and the speaker 104 may emit a second audio signal 108. 
In some cases, the second audio signal 108 may be detectable at the mobile computing device 102 when the mobile computing device 102 is within a particular range of the speaker 104 (e.g., at a distance of less than about 1 to 2 meters from the speaker 104)… that when the mobile computing device 102 is proximate to the particular location, data may be communicated from the mobile computing device 102 to the microphone 110 via the first audio signal 106, from the speaker 104 to the mobile computing device 102 via the second audio signal 108, or a combination thereof.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate an infrastructure speaker as taught by Kao for the purpose of informing the user of the location of items within the user’s proximity, thereby increasing usability.
Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Glaser et al. (US 20240144354 A1) in view of Khatravath et al. (US 20180137452 A1) and further in view of Correnti et al. (US 20210158030 A1).
Regarding claim 20, Glaser et al. in view of Khatravath et al. discloses The network entity of
claim 1,
However, Glaser et al. in combination with Khatravath et al. fails to explicitly disclose wherein the one or more processors, either alone or in combination, are further configured to: associate a user identifier to a visually impaired person without a device configured to communicate with the network entity; associate the user identifier with a detectable user characteristic, the user characteristic detectable by image data, audio data, RF data, haptic data, sensor data, or a combination thereof; detect the user characteristic using image data, audio data, RF data, haptic data, sensor data, or a combination thereof received from at least one infrastructure device; and determine a position of the visually impaired person based at least in part on detecting the user characteristic.
Correnti et al. teaches the one or more processors, either alone or in combination, are further configured to: associate a user identifier to a visually impaired person without a device configured to communicate with the network entity; (Figure 1, and [0039] “In stage A, the monitoring system 111 processes an impairment profile for each user.”) associate the user identifier with a detectable user characteristic, the user characteristic detectable by image data, audio data, RF data, haptic data, sensor data, or a combination thereof; (Figure 1, 116)
detect the user characteristic using image data, audio data, RF data, haptic data, sensor data, or a combination thereof received from at least one infrastructure device; and ([0031] “Within the monitoring system, control algorithms determine the data related to user 102 corresponds with known characteristics of a blind person. This determination can be made by matching visual, motion and other sensor data to known characteristics that correspond with known impairments. Image analysis based on the feed from camera 101 shows a white cane moving back and forth in front of user 102. The monitoring system can match this analysis with existing data to reach a conclusion. The monitoring system summarily determines that user 102 is blind.”) determine a position of the visually impaired person based at least in part on detecting the user characteristic. ([0064] “an impairment profile corresponding to the user is updated based on determining that the user located at the property exhibits symptoms of the impairment. For example, each user of one or more users at a property or recognized by a system such as the system 111, can be associated with an impairment profile. A system, such as the system 111, can be used to store and update one or more of the impairment profiles associated with the one or more users at a property or recognized by a given system. The system can recognize a given user at a specific location and use a corresponding impairment profile to communicate with the user.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, with a reasonable expectation of success, to modify the invention of Glaser et al. in combination with Khatravath et al. to incorporate associating a user identifier with a detectable user characteristic using image data as taught by Correnti et al. for the purpose of allowing the implemented system to provide appropriate help to the user(s).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MISA HUYNH NGUYEN whose telephone number is (571)270-5604. The examiner can normally be reached Monday-Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MISA H NGUYEN/ Examiner, Art Unit 3666
/ANNE MARIE ANTONUCCI/ Supervisory Patent Examiner, Art Unit 3666