DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy of foreign priority Application No. DE102023136752.8, filed on 12/28/2023, has been received.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 11-14 and 17-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sun (US 2022/0289501 A1).
Regarding claim 1, Sun teaches a control device for controlling a robotic device for picking at least one object from a storage box, the control device comprising: a memory device storing, for each storage box of a plurality of storage boxes, respectively associated first box information indicating which multiple objects are arranged in the storage box and in which arrangement the multiple objects are in the storage box [(see at least Fig.2B, paragraphs 29-31) As in 29 “a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task” As in 31 “A robotic system to perform singulation is disclosed. As used herein, singulation of an item includes picking an item from a source pile/flow and placing the item on a conveyance structure (e.g., a segmented conveyor or similar conveyance). Optionally, singulation may include sortation of the various items on the conveyance structure such as via singly placing the items from the source pile/flow into a slot or tray on the conveyor. In various embodiments, singulation and/or sortation is disclosed.”]; and a processor, configured to: receive identification data representing an identification of a storage box [(see at least paragraph 36) “Machine readers, such as radio-frequency (RF) tag readers, optical code readers, etc., may need items to be spaced apart from one another, a process sometimes referred to as “singulation,” to be able to reliably read a tag or code and for the system to associate the resulting information with a specific item, such as an item in a specific location on a conveyor or other structure or instrumentality.”]; determine the first box information associated with the storage box using the identification data; receive sensor data representing an image of the multiple objects arranged in the storage box [(see at least Fig.2A, paragraph 49) “In the example shown in FIG. 2A, system 200 includes image sensors, including in this example 3D cameras 214 and 216. In various embodiments, other types of sensors may be used (individually or in combination) in a singulation system as disclosed herein, including a camera, an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, a weight sensor, and the like. In various embodiments, control computer 212 includes a workspace environment state system such as a vision system used to discern individual items, debris on the workspace, and each item's orientation based on sensor data such as image data provided by image sensors, including in this example 3D cameras 214 and 216. The workspace environment state system in some embodiments includes sensors in the robotic arm to detect a weight of an item (e.g., a grasped item) or to detect information from which an estimated weight is determined. For example, information pertaining to an amount of current, voltage, and/or power used by one or more motors driving movement of the robotic arm can be used to determine the weight (or an estimated weight) of the item.
As another example, the chute includes a weight sensor, and the weight of the item is determined based on a difference of the weight on the chute as measured by the weight sensor before the item is picked up and after the item is picked up.”]; determine, via object recognition using the sensor data, second box information indicating which objects are arranged in the storage box [(see at least Fig.2A, paragraph 49) “As another example, information pertaining to an output from one or more sensor arrays can be used to determine a location of the item in the workspace, a location of the item while the item is grasped and/or being moved by the robotic arm, and/or a location of the robotic arm (e.g., based on a determination of an output from a subset of sensors of the one or more sensor arrays compared to another subset of sensors of the one or more sensor arrays).”]; and generate control instructions for controlling the robotic device to pick up at least one object of the multiple objects based on the first box information associated with the storage box and the second box information [(see at least paragraph 49) “In the example shown, one or more of robotic arm 202, end effector 204, and conveyor 208 are operated in coordination by control computer 212. In various embodiments, a robotic singulation as disclosed herein may include one or more sensors from which an environment of the workspace is modeled. In the example shown in FIG. 2A, system 200 includes image sensors, including in this example 3D cameras 214 and 216. In various embodiments, other types of sensors may be used (individually or in combination) in a singulation system as disclosed herein, including a camera, an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, a weight sensor, and the like.”].
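For orientation only, and without characterizing either the claims or Sun beyond what is stated above, the control flow recited in claim 1 can be pictured as the minimal Python sketch below. All names (ControlDevice, FirstBoxInfo, recognize_objects, and the example object identifiers) are hypothetical placeholders, and the object-recognition step is a stub rather than any particular method.

```python
# Illustrative sketch only; all identifiers are hypothetical and do not appear
# in the application or in Sun.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class FirstBoxInfo:
    """Packing record for one storage box: which objects it holds and their arrangement."""
    object_ids: List[str]
    positions: Dict[str, Tuple[float, float, float]] = field(default_factory=dict)


class ControlDevice:
    def __init__(self) -> None:
        # Memory device: first box information keyed by storage-box identification.
        self._memory: Dict[str, FirstBoxInfo] = {}

    def store_box_info(self, box_id: str, info: FirstBoxInfo) -> None:
        self._memory[box_id] = info

    def recognize_objects(self, sensor_image: List[str]) -> List[str]:
        # Stub for object recognition on an image of the box contents
        # (second box information); a real system would run a vision model here.
        return list(sensor_image)

    def generate_pick_instructions(self, box_id: str, sensor_image: List[str]) -> List[dict]:
        first = self._memory[box_id]                   # determined from identification data
        second = self.recognize_objects(sensor_image)  # determined from sensor data
        # Emit a pick only where the packing record and the recognition agree.
        instructions = []
        for obj in second:
            if obj in first.object_ids:
                instructions.append({"action": "pick", "object": obj,
                                     "position": first.positions.get(obj)})
        return instructions


# Usage: the box is packed with two known objects, then picked after identification.
device = ControlDevice()
device.store_box_info("box-42", FirstBoxInfo(object_ids=["bolt", "gear"],
                                             positions={"bolt": (0.1, 0.2, 0.0)}))
print(device.generate_pick_instructions("box-42", sensor_image=["gear", "bolt"]))
```

The sketch simply looks up the stored packing record by box identification, obtains second box information from the stubbed recognition, and emits pick instructions only where the two sources agree.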
Regarding claim 2, Sun teaches wherein the processor is configured to determine the respectively associated first box information of a storage box of the plurality of storage boxes in that the processor for each object of the multiple objects iteratively: receives object identification data associated with the object before the object is arranged in the storage box, the object identification data representing an identification of the object; stores the first box information associated with the object in the memory device, wherein the first box information comprise the object identification data; and wherein the processor is configured to store the first box information in the memory device, wherein the arrangement of the multiple objects in the storage box comprises an order corresponding to the order in which the processor receives the object identification data of the multiple objects. [(see at least Fig.3A, paragraphs 93-99) As in 93 “In some embodiments, process 300 is implemented by a robot system operating to singulate one or more items within a workspace, such as system 200 of FIG. 2A and FIG. 2B. The robot system includes one or more processors (e.g., in control computer 212 in the examples shown in FIGS. 2A and 2B) which operate, including by performing the process 300, to cause a robotic structure (e.g., a robotic arm) to pick and place items for sorting.” As in 94 “At 310, sensor data pertaining to the workspace is obtained. In some embodiments, a robotic system obtains the sensor data pertaining to the workspace from one or more sensors operating within the system. As an example, the sensor data is obtained based at least in part on outputs from image sensors (e.g., 2D or 3D cameras), an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, and the like.” As in 97 “the plan or strategy to singulate the one or more items in the workspace is determined based at least in part on the sensor data. For example, the plan or strategy to singulate the one or more items includes selecting an item within the source pile/flow that is to be singulated. The selected item can be identified from among other items or objects within the workspace based at least in part on the sensor data (e.g., the boundaries of the item and other items or objects within the workspace can be determined). As an example, one or more characteristics pertaining to the selected item is determined based at least in part on the sensor data. The one or more characteristics pertaining to the selected item can include a dimension of the item, a packaging of the item, one or more identifiers or labels on the item (e.g., an indicator that the item is fragile, a shipping label on the item, etc.), an estimated weight of the item, and the like, or any combination thereof. As another example, the plan to singulate the one or more items includes determining a location on the conveyance structure (e.g., a slot on the conveyor) at which the robotic structure (e.g., the robotic arm) is to singly place the item. The location on the conveyance structure at which the item is to be placed can be determined based at least in part on a timestamp, a speed of the conveyor, and one or more characteristics of a slot in the conveyor (e.g., an indication of whether the slot is occupied or reserved), and the like, or any combination thereof. 
As another example, the plan or strategy to singulate the one or more items includes determining a path or trajectory of the item along which the robotic arm is to move the item during singulation. The path or trajectory of the item along which the item is to be moved can be determined based at least in part on a location of one or more other objects within the workspace such as a frame of the chute, other items in the source pile/flow, items on the conveyor, other robots operating within the workspace, a reserved airspace for operation of other robots, sensors within the workspace, etc. For example, the path or trajectory of the item is determined to move a part of the item comprising an identifier (e.g., a shipping label) to an area at which a scanner is able to scan the identifier, or the path or trajectory of the item is determined to maximize a likelihood that the identifier on the item is read by one or more scanners along the path or trajectory.”]
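As a point of reference for the iterative build-up of first box information addressed in claims 2, 13, and 18, a minimal sketch follows; the function name register_object and the use of a plain dictionary as the memory device are assumptions made only for illustration.

```python
# Hypothetical sketch of building first box information iteratively at packing time;
# the order of entries mirrors the order in which object identification data is received.
from typing import Dict, List

packing_records: Dict[str, List[str]] = {}

def register_object(box_id: str, object_id: str) -> None:
    """Called when an object is identified (e.g., scanned) just before it is placed in the box."""
    packing_records.setdefault(box_id, []).append(object_id)

register_object("box-42", "bolt")
register_object("box-42", "gear")
# The list order now encodes the packing order: ['bolt', 'gear']
print(packing_records["box-42"])
```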
Regarding claim 3, Sun teaches wherein the processor is configured to iteratively for each object of the multiple objects: receive packing station sensor data associated with the object, the packing station sensor data representing an image of the one or more objects arranged in the storage box after the object has been placed in the storage box; using the packing station sensor data, determine position data associated with the object, the position data indicating a position of the object in the storage box; wherein the arrangement comprises the position data; and wherein the processor is configured to generate the control instructions using the position data. [(see at least Fig. 3A, paragraphs 95-101) As in 97 “The one or more characteristics pertaining to the selected item can include a dimension of the item, a packaging of the item, one or more identifiers or labels on the item (e.g., an indicator that the item is fragile, a shipping label on the item, etc.), an estimated weight of the item, and the like, or any combination thereof. As another example, the plan to singulate the one or more items includes determining a location on the conveyance structure (e.g., a slot on the conveyor) at which the robotic structure (e.g., the robotic arm) is to singly place the item. The location on the conveyance structure at which the item is to be placed can be determined based at least in part on a timestamp, a speed of the conveyor, and one or more characteristics of a slot in the conveyor (e.g., an indication of whether the slot is occupied or reserved), and the like, or any combination thereof. As another example, the plan or strategy to singulate the one or more items includes determining a path or trajectory of the item along which the robotic arm is to move the item during singulation. The path or trajectory of the item along which the item is to be moved can be determined based at least in part on a location of one or more other objects within the workspace such as a frame of the chute, other items in the source pile/flow, items on the conveyor, other robots operating within the workspace, a reserved airspace for operation of other robots, sensors within the workspace, etc.” As in 98 “the item is singulated in response to the plan or strategy for singulating the item being determined. For example, a robotic arm is operated to pick one or more items from the workspace and place each item singly in a corresponding location in a singulation conveyance structure. The singulation of the item comprises picking the item from the workspace (e.g., from the source pile/flow) and singly placing the item on the conveyance structure. The robot system singulates the item based at least in part on the plan or strategy for singulating the item.”]
Regarding claim 11, Sun teaches the control device of claim 1; and an unpacking station for unpacking the storage box, wherein the unpacking station comprises the robotic device and a sensor configured to acquire the sensor data. [(see at least paragraph 31) “For example, the robotic system includes a plurality of robotic arms at the same workspace and the plurality of robotic arms operate to pick a plurality of items from the source pile/flow and place the items on the singulation conveyance structure. The plurality of robotic arms may operate autonomously and independently.”]
Regarding claim 12, Sun teaches a method for controlling a robotic device for picking at least one object from a storage box, the method comprising: receiving identification data representing an identification of a storage box [(see at least Fig.2B, paragraphs 29-31, 36,49) As in 29 “a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task” As in 31 “A robotic system to perform singulation is disclosed. As used herein, singulation of an item includes picking an item from a source pile/flow and placing the item on a conveyance structure (e.g., a segmented conveyor or similar conveyance). Optionally, singulation may include sortation of the various items on the conveyance structure such as via singly placing the items from the source pile/flow into a slot or tray on the conveyor. In various embodiments, singulation and/or sortation is disclosed.”]; determining first box information associated with the storage box using the identification data, wherein the first box information indicate which multiple objects are arranged in the storage box and in which arrangement the multiple objects are in the storage box; receiving sensor data representing an image of the multiple objects arranged in the storage box [(see at least Fig.2A, paragraph 49) “In the example shown in FIG. 2A, system 200 includes image sensors, including in this example 3D cameras 214 and 216. In various embodiments, other types of sensors may be used (individually or in combination) in a singulation system as disclosed herein, including a camera, an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, a weight sensor, and the like. In various embodiments, control computer 212 includes a workspace environment state system such as a vision system used to discern individual items, debris on the workspace, and each item's orientation based on sensor data such as image data provided by image sensors, including in this example 3D cameras 214 and 216. The workspace environment state system in some embodiments includes sensors in the robotic arm to detect a weight of an item (e.g., a grasped item) or to detect information from which an estimated weight is determined. For example, information pertaining to an amount of current, voltage, and/or power used by one or more motors driving movement of the robotic arm can be used to determine the weight (or an estimated weight) of the item. 
As another example, the chute includes a weight sensor, and the weight of the item is determined based on a difference of the weight on the chute as measured by the weight sensor before the item is picked up and after the item is picked up.”]; determining second box information via object recognition using the sensor data, wherein the second box information indicate which objects are arranged in the storage box [(see at least Fig.2A, paragraph 49) “As another example, information pertaining to an output from one or more sensor arrays can be used to determine a location of the item in the workspace, a location of the item while the item is grasped and/or being moved by the robotic arm, and/or a location of the robotic arm (e.g., based on a determination of an output from a subset of sensors of the one or more sensor arrays compared to another subset of sensors of the one or more sensor arrays).”]; and generating control instructions for controlling the robotic device to pick up at least one object of the multiple objects based on the first box information and the second box information. [(see at least paragraph 49) “In the example shown, one or more of robotic arm 202, end effector 204, and conveyor 208 are operated in coordination by control computer 212. In various embodiments, a robotic singulation as disclosed herein may include one or more sensors from which an environment of the workspace is modeled. In the example shown in FIG. 2A, system 200 includes image sensors, including in this example 3D cameras 214 and 216. In various embodiments, other types of sensors may be used (individually or in combination) in a singulation system as disclosed herein, including a camera, an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, a weight sensor, and the like.”].
Regarding claim 13, Sun teaches further comprising determining the respectively associated first box information of a storage box of a plurality of storage boxes by, for each object of the multiple objects, iteratively receiving object identification data associated with the object before the object is arranged in the storage box, the object identification data representing an identification of the object; storing the first box information associated with the object in a memory device, wherein the first box information comprise the object identification data; and storing the first box information in the memory device, wherein the arrangement of the multiple objects in the storage box comprises an order corresponding to the order in which the object identification data of the multiple objects is received. [(see at least Fig.3A, paragraphs 93-99) As in 93 “In some embodiments, process 300 is implemented by a robot system operating to singulate one or more items within a workspace, such as system 200 of FIG. 2A and FIG. 2B. The robot system includes one or more processors (e.g., in control computer 212 in the examples shown in FIGS. 2A and 2B) which operate, including by performing the process 300, to cause a robotic structure (e.g., a robotic arm) to pick and place items for sorting.” As in 94 “At 310, sensor data pertaining to the workspace is obtained. In some embodiments, a robotic system obtains the sensor data pertaining to the workspace from one or more sensors operating within the system. As an example, the sensor data is obtained based at least in part on outputs from image sensors (e.g., 2D or 3D cameras), an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, and the like.” As in 97 “the plan or strategy to singulate the one or more items in the workspace is determined based at least in part on the sensor data. For example, the plan or strategy to singulate the one or more items includes selecting an item within the source pile/flow that is to be singulated. The selected item can be identified from among other items or objects within the workspace based at least in part on the sensor data (e.g., the boundaries of the item and other items or objects within the workspace can be determined). As an example, one or more characteristics pertaining to the selected item is determined based at least in part on the sensor data. The one or more characteristics pertaining to the selected item can include a dimension of the item, a packaging of the item, one or more identifiers or labels on the item (e.g., an indicator that the item is fragile, a shipping label on the item, etc.), an estimated weight of the item, and the like, or any combination thereof. As another example, the plan to singulate the one or more items includes determining a location on the conveyance structure (e.g., a slot on the conveyor) at which the robotic structure (e.g., the robotic arm) is to singly place the item. The location on the conveyance structure at which the item is to be placed can be determined based at least in part on a timestamp, a speed of the conveyor, and one or more characteristics of a slot in the conveyor (e.g., an indication of whether the slot is occupied or reserved), and the like, or any combination thereof. As another example, the plan or strategy to singulate the one or more items includes determining a path or trajectory of the item along which the robotic arm is to move the item during singulation. 
The path or trajectory of the item along which the item is to be moved can be determined based at least in part on a location of one or more other objects within the workspace such as a frame of the chute, other items in the source pile/flow, items on the conveyor, other robots operating within the workspace, a reserved airspace for operation of other robots, sensors within the workspace, etc. For example, the path or trajectory of the item is determined to move a part of the item comprising an identifier (e.g., a shipping label) to an area at which a scanner is able to scan the identifier, or the path or trajectory of the item is determined to maximize a likelihood that the identifier on the item is read by one or more scanners along the path or trajectory.”]
Regarding claim 14, Sun teaches further comprising iteratively: receiving packing station sensor data associated with the object, the packing station sensor data representing an image of the one or more objects arranged in the storage box after the object has been placed in the storage box; wherein the arrangement comprises position data indicating a position of the object in the storage box; and generating the control instructions using the position data. [(see at least Fig. 3A, paragraphs 95-101) As in 97 “The one or more characteristics pertaining to the selected item can include a dimension of the item, a packaging of the item, one or more identifiers or labels on the item (e.g., an indicator that the item is fragile, a shipping label on the item, etc.), an estimated weight of the item, and the like, or any combination thereof. As another example, the plan to singulate the one or more items includes determining a location on the conveyance structure (e.g., a slot on the conveyor) at which the robotic structure (e.g., the robotic arm) is to singly place the item. The location on the conveyance structure at which the item is to be placed can be determined based at least in part on a timestamp, a speed of the conveyor, and one or more characteristics of a slot in the conveyor (e.g., an indication of whether the slot is occupied or reserved), and the like, or any combination thereof. As another example, the plan or strategy to singulate the one or more items includes determining a path or trajectory of the item along which the robotic arm is to move the item during singulation. The path or trajectory of the item along which the item is to be moved can be determined based at least in part on a location of one or more other objects within the workspace such as a frame of the chute, other items in the source pile/flow, items on the conveyor, other robots operating within the workspace, a reserved airspace for operation of other robots, sensors within the workspace, etc.” As in 98 “the item is singulated in response to the plan or strategy for singulating the item being determined. For example, a robotic arm is operated to pick one or more items from the workspace and place each item singly in a corresponding location in a singulation conveyance structure. The singulation of the item comprises picking the item from the workspace (e.g., from the source pile/flow) and singly placing the item on the conveyance structure. The robot system singulates the item based at least in part on the plan or strategy for singulating the item.”]
Regarding claim 17, Sun teaches a non-transitory computer-readable medium storing instructions, which, when executed by a processor, cause the processor to: receive identification data representing an identification of a storage box [(see at least paragraphs 29,36) As in 36 “Machine readers, such as radio-frequency (RF) tag readers, optical code readers, etc., may need items to be spaced apart from one another, a process sometimes referred to as “singulation,” to be able to reliably read a tag or code and for the system to associate the resulting information with a specific item, such as an item in a specific location on a conveyor or other structure or instrumentality.”]; determine first box information associated with the storage box using the identification data, wherein the first box information indicate which multiple objects are arranged in the storage box and in which arrangement the multiple objects are in the storage box; receive sensor data representing an image of the multiple objects arranged in the storage box [(see at least Fig.2A, paragraph 49) “In the example shown in FIG. 2A, system 200 includes image sensors, including in this example 3D cameras 214 and 216. In various embodiments, other types of sensors may be used (individually or in combination) in a singulation system as disclosed herein, including a camera, an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, a weight sensor, and the like. In various embodiments, control computer 212 includes a workspace environment state system such as a vision system used to discern individual items, debris on the workspace, and each item's orientation based on sensor data such as image data provided by image sensors, including in this example 3D cameras 214 and 216. The workspace environment state system in some embodiments includes sensors in the robotic arm to detect a weight of an item (e.g., a grasped item) or to detect information from which an estimated weight is determined. For example, information pertaining to an amount of current, voltage, and/or power used by one or more motors driving movement of the robotic arm can be used to determine the weight (or an estimated weight) of the item. As another example, the chute includes a weight sensor, and the weight of the item is determined based on a difference of the weight on the chute as measured by the weight sensor before the item is picked up and after the item is picked up.”]; determine second box information via object recognition using the sensor data, wherein the second box information indicate which objects are arranged in the storage box [(see at least Fig.2A, paragraph 49) “As another example, information pertaining to an output from one or more sensor arrays can be used to determine a location of the item in the workspace, a location of the item while the item is grasped and/or being moved by the robotic arm, and/or a location of the robotic arm (e.g., based on a determination of an output from a subset of sensors of the one or more sensor arrays compared to another subset of sensors of the one or more sensor arrays).”]; and generate control instructions for controlling a robotic device to pick up at least one object of the multiple objects based on the first box information and the second box information. 
[(see at least paragraph 49) “In the example shown, one or more of robotic arm 202, end effector 204, and conveyor 208 are operated in coordination by control computer 212. In various embodiments, a robotic singulation as disclosed herein may include one or more sensors from which an environment of the workspace is modeled. In the example shown in FIG. 2A, system 200 includes image sensors, including in this example 3D cameras 214 and 216. In various embodiments, other types of sensors may be used (individually or in combination) in a singulation system as disclosed herein, including a camera, an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, a weight sensor, and the like.”].
Regarding claim 18, Sun teaches wherein the instructions are further configured to cause the processor to determine the respectively associated first box information of a storage box of a plurality of storage boxes by, for each object of the multiple objects, iteratively receiving object identification data associated with the object before the object is arranged in the storage box, the object identification data representing an identification of the object; storing the first box information associated with the object in a memory device, wherein the first box information comprise the object identification data; and storing the first box information in the memory device, wherein the arrangement of the multiple objects in the storage box comprises an order corresponding to the order in which the object identification data of the multiple objects is received. [(see at least Fig.3A, paragraphs 93-99) As in 93 “In some embodiments, process 300 is implemented by a robot system operating to singulate one or more items within a workspace, such as system 200 of FIG. 2A and FIG. 2B. The robot system includes one or more processors (e.g., in control computer 212 in the examples shown in FIGS. 2A and 2B) which operate, including by performing the process 300, to cause a robotic structure (e.g., a robotic arm) to pick and place items for sorting.” As in 94 “At 310, sensor data pertaining to the workspace is obtained. In some embodiments, a robotic system obtains the sensor data pertaining to the workspace from one or more sensors operating within the system. As an example, the sensor data is obtained based at least in part on outputs from image sensors (e.g., 2D or 3D cameras), an infrared sensor array, a laser array, a scale, a gyroscope, a current sensor, a voltage sensor, a power sensor, a force sensor, a pressure sensor, and the like.” As in 97 “the plan or strategy to singulate the one or more items in the workspace is determined based at least in part on the sensor data. For example, the plan or strategy to singulate the one or more items includes selecting an item within the source pile/flow that is to be singulated. The selected item can be identified from among other items or objects within the workspace based at least in part on the sensor data (e.g., the boundaries of the item and other items or objects within the workspace can be determined). As an example, one or more characteristics pertaining to the selected item is determined based at least in part on the sensor data. The one or more characteristics pertaining to the selected item can include a dimension of the item, a packaging of the item, one or more identifiers or labels on the item (e.g., an indicator that the item is fragile, a shipping label on the item, etc.), an estimated weight of the item, and the like, or any combination thereof. As another example, the plan to singulate the one or more items includes determining a location on the conveyance structure (e.g., a slot on the conveyor) at which the robotic structure (e.g., the robotic arm) is to singly place the item. The location on the conveyance structure at which the item is to be placed can be determined based at least in part on a timestamp, a speed of the conveyor, and one or more characteristics of a slot in the conveyor (e.g., an indication of whether the slot is occupied or reserved), and the like, or any combination thereof. 
As another example, the plan or strategy to singulate the one or more items includes determining a path or trajectory of the item along which the robotic arm is to move the item during singulation. The path or trajectory of the item along which the item is to be moved can be determined based at least in part on a location of one or more other objects within the workspace such as a frame of the chute, other items in the source pile/flow, items on the conveyor, other robots operating within the workspace, a reserved airspace for operation of other robots, sensors within the workspace, etc. For example, the path or trajectory of the item is determined to move a part of the item comprising an identifier (e.g., a shipping label) to an area at which a scanner is able to scan the identifier, or the path or trajectory of the item is determined to maximize a likelihood that the identifier on the item is read by one or more scanners along the path or trajectory.”]
Regarding claim 19, Sun teaches wherein the instructions are further configured to cause the processor to iteratively: receive packing station sensor data associated with the object, the packing station sensor data representing an image of the one or more objects arranged in the storage box after the object has been placed in the storage box; determine, using the packing station sensor data associated with the object, position data indicating a position of the object in the storage box; wherein the arrangement comprises the position data; and generate the control instructions using the position data. [(see at least Fig. 3A, paragraphs 95-101) As in 97 “The one or more characteristics pertaining to the selected item can include a dimension of the item, a packaging of the item, one or more identifiers or labels on the item (e.g., an indicator that the item is fragile, a shipping label on the item, etc.), an estimated weight of the item, and the like, or any combination thereof. As another example, the plan to singulate the one or more items includes determining a location on the conveyance structure (e.g., a slot on the conveyor) at which the robotic structure (e.g., the robotic arm) is to singly place the item. The location on the conveyance structure at which the item is to be placed can be determined based at least in part on a timestamp, a speed of the conveyor, and one or more characteristics of a slot in the conveyor (e.g., an indication of whether the slot is occupied or reserved), and the like, or any combination thereof. As another example, the plan or strategy to singulate the one or more items includes determining a path or trajectory of the item along which the robotic arm is to move the item during singulation. The path or trajectory of the item along which the item is to be moved can be determined based at least in part on a location of one or more other objects within the workspace such as a frame of the chute, other items in the source pile/flow, items on the conveyor, other robots operating within the workspace, a reserved airspace for operation of other robots, sensors within the workspace, etc.” As in 98 “the item is singulated in response to the plan or strategy for singulating the item being determined. For example, a robotic arm is operated to pick one or more items from the workspace and place each item singly in a corresponding location in a singulation conveyance structure. The singulation of the item comprises picking the item from the workspace (e.g., from the source pile/flow) and singly placing the item on the conveyance structure. The robot system singulates the item based at least in part on the plan or strategy for singulating the item.”]
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4-5, 15-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Lovett (US 2022/0334560 A1).
Regarding claim 4, Sun has all of the elements of claim 3 as discussed above.
Sun does not explicitly teach wherein the processor is configured to: determine stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generate the control instructions using the stacking information.
However, Lovett teaches wherein the processor is configured to: determine stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generate the control instructions using the stacking information. [(see at least paragraph 50) “The respective robots 112, 114 are operated at the same time, fully autonomously, to pick trays from source tray stacks 102, 104 and place them on destination tray stacks, such as destination tray stacks 120, 122, in a destination tray stack assembly area on an opposite side of rail 110 from conveyance 106 and source tray stacks 102, 104. The destination tray stacks may be assembled, in various embodiments, according to invoice, manifest, order, or other information. For example, for each of a plurality of physical destinations (e.g., retail stores), a destination stack associated with that destination (e.g., according to an order placed by the destination) is built by selecting trays from respective source tray stacks 102, 104 and stacking them on a corresponding destination tray stack 120, 122. Completed destination tray stacks 120, 122 may be removed from the destination tray stack assembly area, as indicated by arrow 124, e.g., to be place on trucks, rail cars, containers, etc. for delivery to a further destination, such as a retail store.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Sun to incorporate the teachings of Lovett of wherein the processor is configured to: determine stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generate the control instructions using the stacking information in order to generate/identify a plan to stack items according to various properties. [(Lovett 135)]
Regarding claim 5, Modified Sun has all of the elements of claim 4 as discussed above.
Sun does not explicitly teach wherein the processor is further configured to determine, using the arrangement, stacking information indicating how a plurality of objects are stacked on top of each other in the storage box; and to generate the control instructions using the stacking information.
However, Lovett teaches wherein the processor is further configured to determine, using the arrangement, stacking information indicating how a plurality of objects are stacked on top of each other in the storage box; and to generate the control instructions using the stacking information. [(see at least paragraphs 50-57) As in 50 “The respective robots 112, 114 are operated at the same time, fully autonomously, to pick trays from source tray stacks 102, 104 and place them on destination tray stacks, such as destination tray stacks 120, 122, in a destination tray stack assembly area on an opposite side of rail 110 from conveyance 106 and source tray stacks 102, 104. The destination tray stacks may be assembled, in various embodiments, according to invoice, manifest, order, or other information. For example, for each of a plurality of physical destinations (e.g., retail stores), a destination stack associated with that destination (e.g., according to an order placed by the destination) is built by selecting trays from respective source tray stacks 102, 104 and stacking them on a corresponding destination tray stack 120, 122. Completed destination tray stacks 120, 122 may be removed from the destination tray stack assembly area, as indicated by arrow 124, e.g., to be place on trucks, rail cars, containers, etc. for delivery to a further destination, such as a retail store.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Sun to further incorporate the teachings of Lovett of wherein the processor is further configured to determine, using the arrangement, stacking information indicating how a plurality of objects are stacked on top of each other in the storage box; and to generate the control instructions using the stacking information in order to generate/identify a plan to stack items according to various properties. [(Lovett 135)]
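Purely as an illustration of what “stacking information” derived from per-object position data could look like (claims 4-5, 15-16, and 20), the following sketch infers which objects rest on which from hypothetical (x, y, z) positions; the coordinate convention, tolerance, and function name are assumptions, not taken from the application or from Lovett.

```python
# Illustrative only: deriving stacking information from per-object position data,
# assuming positions are (x, y, z) box coordinates with z pointing up.
from typing import Dict, List, Tuple

Position = Tuple[float, float, float]

def stacking_info(positions: Dict[str, Position],
                  xy_tol: float = 0.05) -> Dict[str, List[str]]:
    """Return, for each object, the objects located directly below it (i.e., it rests on them)."""
    below: Dict[str, List[str]] = {name: [] for name in positions}
    for upper, (ux, uy, uz) in positions.items():
        for lower, (lx, ly, lz) in positions.items():
            if lower == upper:
                continue
            # Same footprint (within tolerance) and a smaller z means "stacked on top of".
            if abs(ux - lx) < xy_tol and abs(uy - ly) < xy_tol and lz < uz:
                below[upper].append(lower)
    return below

print(stacking_info({"gear": (0.1, 0.1, 0.10), "bolt": (0.1, 0.1, 0.02)}))
# {'gear': ['bolt'], 'bolt': []} -> the gear is stacked on the bolt.
```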
Regarding claim 15, Sun has all of the elements of claim 14 as discussed above.
Sun does not explicitly teach further comprising: determining stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generating the control instructions using the stacking information.
However, Lovett teaches further comprising: determining stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generating the control instructions using the stacking information. [(see at least paragraph 50) “The respective robots 112, 114 are operated at the same time, fully autonomously, to pick trays from source tray stacks 102, 104 and place them on destination tray stacks, such as destination tray stacks 120, 122, in a destination tray stack assembly area on an opposite side of rail 110 from conveyance 106 and source tray stacks 102, 104. The destination tray stacks may be assembled, in various embodiments, according to invoice, manifest, order, or other information. For example, for each of a plurality of physical destinations (e.g., retail stores), a destination stack associated with that destination (e.g., according to an order placed by the destination) is built by selecting trays from respective source tray stacks 102, 104 and stacking them on a corresponding destination tray stack 120, 122. Completed destination tray stacks 120, 122 may be removed from the destination tray stack assembly area, as indicated by arrow 124, e.g., to be place on trucks, rail cars, containers, etc. for delivery to a further destination, such as a retail store.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Sun to incorporate the teachings of Lovett of determining stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generating the control instructions using the stacking information in order to generate/identify a plan to stack items according to various properties. [(Lovett 135)]
Regarding claim 16, Modified Sun has all of the elements of claim 15 as discussed above.
Sun does not explicitly teach further comprising determining, using the arrangement, stacking information indicating how a plurality of objects are stacked on top of each other in the storage box; and generating the control instructions using the stacking information.
However, Lovett teaches further comprising determining, using the arrangement, stacking information indicating how a plurality of objects are stacked on top of each other in the storage box; and generating the control instructions using the stacking information. [(see at least paragraphs 50-57) As in 50 “The respective robots 112, 114 are operated at the same time, fully autonomously, to pick trays from source tray stacks 102, 104 and place them on destination tray stacks, such as destination tray stacks 120, 122, in a destination tray stack assembly area on an opposite side of rail 110 from conveyance 106 and source tray stacks 102, 104. The destination tray stacks may be assembled, in various embodiments, according to invoice, manifest, order, or other information. For example, for each of a plurality of physical destinations (e.g., retail stores), a destination stack associated with that destination (e.g., according to an order placed by the destination) is built by selecting trays from respective source tray stacks 102, 104 and stacking them on a corresponding destination tray stack 120, 122. Completed destination tray stacks 120, 122 may be removed from the destination tray stack assembly area, as indicated by arrow 124, e.g., to be place on trucks, rail cars, containers, etc. for delivery to a further destination, such as a retail store.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Sun to further incorporate the teachings of Lovett of determining, using the arrangement, stacking information indicating how a plurality of objects are stacked on top of each other in the storage box; and generating the control instructions using the stacking information in order to generate/identify a plan to stack items according to various properties. [(Lovett 135)]
Regarding claim 20, Sun has all of the elements of claim 19 as discussed above.
Sun does not explicitly teach wherein the instructions are further configured to cause the processor to: determine stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generate the control instructions using the stacking information.
However, Lovett teaches wherein the instructions are further configured to cause the processor to: determine stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generate the control instructions using the stacking information. [(see at least paragraphs 50-57) As in 50 “The respective robots 112, 114 are operated at the same time, fully autonomously, to pick trays from source tray stacks 102, 104 and place them on destination tray stacks, such as destination tray stacks 120, 122, in a destination tray stack assembly area on an opposite side of rail 110 from conveyance 106 and source tray stacks 102, 104. The destination tray stacks may be assembled, in various embodiments, according to invoice, manifest, order, or other information. For example, for each of a plurality of physical destinations (e.g., retail stores), a destination stack associated with that destination (e.g., according to an order placed by the destination) is built by selecting trays from respective source tray stacks 102, 104 and stacking them on a corresponding destination tray stack 120, 122. Completed destination tray stacks 120, 122 may be removed from the destination tray stack assembly area, as indicated by arrow 124, e.g., to be place on trucks, rail cars, containers, etc. for delivery to a further destination, such as a retail store.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Sun to incorporate the teachings of Lovett of wherein the instructions are further configured to cause the processor to: determine stacking information using the arrangement comprising the position data, the stacking information indicating how the multiple objects are stacked on each other in the storage box; and generate the control instructions using the stacking information in order to generate/identify a plan to stack items according to various properties. [(Lovett 135)]
Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of D’Amelio (US 2024/0157561 A1).
Regarding claim 6, Sun teaches wherein the processor is configured to: determine a packing plan for packing the multiple objects into the storage box, wherein the packing plan indicates an order in which the multiple objects are to be packed into the storage box and a respective position for each object of the multiple objects [(see paragraph 31) “In various embodiments, singulation and/or sortation is performed based at least in part on detecting a state or condition associated with one or more items in the workspace and performing an active measure to adapt to the state or condition in connection with picking an item from a source pile/flow (e.g., a workspace) and placing the item on a segmented conveyor or similar conveyance to be sorted and routed for transport to a downstream (e.g., ultimate addressed/physical) destination. In some embodiments, the robotic system determines a plan to singulate an item (e.g., to pick the item from the workspace and place the item on a singulation conveyance structure), and performs the active measure in response to determining the detected state or condition after the plan was initially determined. In some embodiments, multiple robots are coordinated to maximize collective throughput. For example, the robotic system includes a plurality of robotic arms at the same workspace and the plurality of robotic arms operate to pick a plurality of items from the source pile/flow and place the items on the singulation conveyance structure. The plurality of robotic arms may operate autonomously and independently.”]
Sun does not explicitly teach wherein a prediction of a packing time duration required to pack the multiple objects into the storage box and a prediction of an unpacking time duration required to unpack the multiple objects from the storage box are taken into account when determining the packing plan.
However, D’Amelio teaches wherein a prediction of a packing time duration required to pack the multiple objects into the storage box and a prediction of an unpacking time duration required to unpack the multiple objects from the storage box are taken into account when determining the packing plan. [(see at least paragraph 250) “As indicated, using the models developed herein, successfully packing a container for shipment with increased efficiency and decreased time may result in greater than about 90%, greater than about 95%, greater than about 98% void utilization and placement of over 1000, 2000, or even about 2500 parcels per hour or two.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Sun to incorporate the teachings of D’Amelio of a prediction of a packing time duration required to pack the multiple objects into the storage box and a prediction of an unpacking time duration required to unpack the multiple objects from the storage box are taken into account when determining the packing plan in order to successfully pack a container/box for shipment with increased efficiency and decreased time. [(D’Amelio 250)]
Regarding claim 7, in view of the above combination of references, Sun further teaches wherein the processor is configured to: store the packing plan in the memory device; and generate the control instructions for controlling the robotic device using the packing plan. [(see at least paragraph 249) “According to various embodiments, the robotic singulation station schedulers 924, 926, 928, and 930 register with global scheduler 922 plans or strategies for operating corresponding robots to singulate items, or otherwise store such plans or strategies in a storage location that is accessible to global scheduler 922. The robotic singulation station schedulers 924, 926, 928, and 930 can independently determine the plans or strategies for operating corresponding robots to singulate items. In some embodiments, although the robotic singulation station schedulers 924, 926, 928, and 930 operate independently to determine their respective plans or strategies, the robotic singulation station schedulers 924, 926, 928, and 930 determine their respective plans or strategies at different times (e.g., so that a same item is not selected for singulation by two robots, etc.). In some embodiments, the robotic singulation station schedulers 924, 926, 928, and 930 operate independently to determine their respective plans or strategies, and the robotic singulation station schedulers 924, 926, 928, and 930 register with their respective plans or strategies global scheduler 922 at different times, and global scheduler 922 can send a fault to a robotic singulation station scheduler if during registration of its plan or strategy global scheduler 922 that such plan or strategy conflicts with an existing registered plan or strategy.”]
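The packing-plan selection of claims 6-7 can likewise be sketched. The duration models below are placeholders invented for illustration (neither Sun nor D’Amelio is alleged to use them); the point is only that predicted packing and unpacking durations enter into the selection of the plan, and that the chosen plan can then be stored and used to generate control instructions.

```python
# Illustrative only; the duration models are placeholders, not the applicant's
# or the cited references' methods.
from itertools import permutations
from typing import Dict, List, Tuple

HANDLING_SECONDS: Dict[str, float] = {"bolt": 2.0, "gear": 3.5, "washer": 1.0}

def predict_pack_time(order: List[str]) -> float:
    # Stand-in model: placing into an increasingly full box gets slower.
    return sum(HANDLING_SECONDS[obj] * (1.0 + 0.2 * i) for i, obj in enumerate(order))

def predict_unpack_time(order: List[str]) -> float:
    # Stand-in model: objects are unpacked in reverse order; deeper objects take longer to reach.
    return sum(HANDLING_SECONDS[obj] * (1.0 + 0.3 * i) for i, obj in enumerate(reversed(order)))

def choose_packing_plan(objects: List[str]) -> Tuple[List[str], float]:
    """Pick the packing order minimizing predicted pack time plus predicted unpack time."""
    candidates = [list(p) for p in permutations(objects)]
    best = min(candidates, key=lambda o: predict_pack_time(o) + predict_unpack_time(o))
    return best, predict_pack_time(best) + predict_unpack_time(best)

print(choose_packing_plan(["bolt", "gear", "washer"]))
```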
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Deyle (US 2020/0061839 A1).
Regarding claim 8, Sun has all of the elements of claim 1 as discussed above.
Sun does not explicitly teach wherein the processor is configured to determine, based on first box information associated with the storage box and second box information, whether one or more than one object of the multiple objects is damaged.
However, Deyle teaches wherein the processor is configured to determine, based on first box information associated with the storage box and second box information, whether one or more than one object of the multiple objects is damaged. [(see at least paragraph 341) “In some embodiments, the mobile robot 2710 uses the camera 2720 to perform quality control operations on items in the retail environment 2700. The mobile robot 2710 can identify damaged items that are on display. For example, the mobile robot 2710 detects defects such as opened packaging, crushed boxes, dirty items, missing parts, or other damage conditions.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Sun to incorporate the teachings of Deyle of determining, based on first box information associated with the storage box and second box information, whether one or more than one object of the multiple objects is damaged, in order to determine that a condition of an item differs from that of the other items and/or to remove the damaged item. [(Deyle 341)]
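For illustration only, the following is a minimal sketch (Python; the field names and the 5% tolerance are assumptions of this sketch and are not taken from Sun or Deyle) of comparing first box information with second box information to flag possibly damaged objects:

from typing import Dict, List

def find_suspect_objects(first_box_info: Dict, second_box_info: Dict, tolerance: float = 0.05) -> List[str]:
    # Compare the stored (expected) dimensions with the observed dimensions per object.
    suspects = []
    for object_id, expected in first_box_info["objects"].items():
        observed = second_box_info["objects"].get(object_id)
        if observed is None:
            suspects.append(object_id)  # expected in the box but not detected
            continue
        # A crushed or opened package would typically deviate in observed height.
        if abs(observed["height"] - expected["height"]) > tolerance * expected["height"]:
            suspects.append(object_id)
    return suspects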
Claims 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over Sun in view of Marseglia (US 2023/0415943 A1).
Regarding claim 9, Sun has all of the elements of claim 1 as discussed above.
Sun does not explicitly teach wherein the second box information comprises multiple predicted objects; and wherein the processor is configured to assign an object of the multiple objects to each predicted object of the multiple predicted objects.
However, Marseglia teaches wherein the second box information comprises multiple predicted objects; and wherein the processor is configured to assign an object of the multiple objects to each predicted object of the multiple predicted objects. [(see at least paragraph 111) “the present invention provides a system for packaging product items (e.g., cannabis buds) of a same type into individual containers, each container having a predetermined loaded target weight. The system includes a product item pick and place robot having an arm with a free end including individual product item pickers; a product item picking tray adapted to receive and present for picking by the individual product item pickers, a number of product items. The system further includes a computer vision system electrically coupled to the product item pick and place robot, the computer vision system having imaging sensor(s) (e.g., as part of a camera) operable to view individual product items on the product item picking tray and predicting weights thereof. The computer vision system, based on the predicted weights of the product items on the product item picking tray and identified in the digital image (within the field of view), is operable to direct the product item pick and place robot to pick product item(s) from the product item picking tray using the individual product item picker(s), respectively, and depositing the picked product item(s), e.g., one at a time, and in an amount to load a container to its predetermined loaded target weight.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Sun to incorporate the teachings of Marseglia of second box information comprising multiple predicted objects and of a processor configured to assign an object of the multiple objects to each predicted object of the multiple predicted objects, in order to collectively weigh/identify, within a predetermined tolerance, the predetermined loaded target weight of the container/box to be filled. [(Marseglia 114)]
Regarding claim 10, Modified Sun has all of the elements of claim 9 as discussed above.
Sun does not explicitly teach wherein the arrangement comprises respective position data for each object of the multiple objects, wherein the respective position data indicate a position of the object in the storage box; wherein the second box information further comprise respective predicted position data for each predicted object of the multiple predicted objects, the respective predicted position data indicated a predicted position of the predicted object in the storage box; and wherein the processor is configured to assign an object of the multiple objects to each predicted object of the multiple predicted objects using a comparison of the position data with the predicted position data.
However, Marseglia teaches wherein the arrangement comprises respective position data for each object of the multiple objects, wherein the respective position data indicate a position of the object in the storage box; wherein the second box information further comprise respective predicted position data for each predicted object of the multiple predicted objects, the respective predicted position data indicated a predicted position of the predicted object in the storage box; and wherein the processor is configured to assign an object of the multiple objects to each predicted object of the multiple predicted objects using a comparison of the position data with the predicted position data. [(see at least paragraphs 111-115) As in 111 “The system further includes a computer vision system electrically coupled to the product item pick and place robot, the computer vision system having imaging sensor(s) (e.g., as part of a camera) operable to view individual product items on the product item picking tray and predicting weights thereof. The computer vision system, based on the predicted weights of the product items on the product item picking tray and identified in the digital image (within the field of view), is operable to direct the product item pick and place robot to pick product item(s) from the product item picking tray using the individual product item picker(s), respectively, and depositing the picked product item(s), e.g., one at a time, and in an amount to load a container to its predetermined loaded target weight.” As in 114 “the computer vision system operable to cause the arm to rotate the picker head so as to position a desired one of the individual product item pickers above a product item selected by the computer vision system to be picked from the product item picking tray based on the predicted weight of the product item as determined by the computer vision system. In one example, the computer vision system may be operable to detect when the individual product item pickers have picked the product item(s) that collectively weigh, within a predetermined tolerance, the predetermined loaded target weight of the container to be filled. In another example, the computer vision system may be operable to instruct and cause the product item pick and place robot to deposit the picked product item(s) into the container to be filled. In still another example, the computer vision system may be operable to instruct and cause the product item pick and place robot to deposit the picked product item(s), for example, one at a time, into the container to be filled. In yet another example, the computer vision system is operable to instruct and cause the product item pick and place robot and the picking heads to deposit the picked product item(s), e.g., one at a time, into the container to be filled in descending weight order.”]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of modified Sun to further incorporate the teachings of Marseglia of an arrangement comprising respective position data for each object of the multiple objects indicating a position of the object in the storage box, of second box information further comprising respective predicted position data for each predicted object of the multiple predicted objects indicating a predicted position of the predicted object in the storage box, and of a processor configured to assign an object of the multiple objects to each predicted object of the multiple predicted objects using a comparison of the position data with the predicted position data, in order to instruct and cause the product item pick and place robot to deposit the picked product items into the container to be filled. [(Marseglia 114)]
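For illustration only, the following is a minimal sketch (Python, using NumPy and SciPy; the array shapes and the Euclidean-distance cost are assumptions of this sketch and are not taken from Sun or Marseglia) of assigning each observed object to a predicted object by comparing position data with predicted position data:

import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_objects(observed_positions: np.ndarray, predicted_positions: np.ndarray):
    # observed_positions and predicted_positions are (N, 3) arrays of box coordinates.
    diffs = observed_positions[:, None, :] - predicted_positions[None, :, :]
    cost = np.linalg.norm(diffs, axis=-1)            # pairwise Euclidean distances
    obs_idx, pred_idx = linear_sum_assignment(cost)  # minimum-total-distance matching
    return list(zip(obs_idx.tolist(), pred_idx.tolist()))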
The Examiner has cited particular paragraphs or columns and line numbers in the references applied to the claims above for the convenience of the Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested of the Applicant, in preparing responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner. See MPEP 2141.02 [R-07.2015] VI. A prior art reference must be considered in its entirety, i.e., as a whole, including portions that would lead away from the claimed invention. W.L. Gore & Associates, Inc. v. Garlock, Inc., 721 F.2d 1540, 220 USPQ 303 (Fed. Cir. 1983), cert. denied, 469 U.S. 851 (1984). See also MPEP §2123.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
(US 2025/0083316 A1) Ichien - CONTROL DEVICE, CONTROL METHOD, AND STORAGE MEDIUM
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMED YOUSEF ABUELHAWA whose telephone number is (571)272-3219. The examiner can normally be reached Monday-Friday 8:30-5:00 with flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wade Miles, can be reached at 571-270-7777. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMMED YOUSEF ABUELHAWA/Examiner, Art Unit 3656
/WADE MILES/Supervisory Patent Examiner, Art Unit 3656