DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 10/16/25 have been fully considered but they are not persuasive.
Regarding claim 1, Applicant states that the references of Swope, Tomochika, and Moed, in combination, fail to teach: detecting, by a device and based on an image, an object other than an adhesive on a conveyor; and printing, by the device and via a label printer, a label associated with the object (Applicant's Remarks, pages 8-9). In particular, Applicant states that optimizing production would not serve as a motivation to combine Swope and Moed (Applicant's Remarks, pages 10-12). Examiner disagrees with Applicant.
Although Moed includes a label, Moed is relied upon to teach detecting a package on a conveyor belt. Moed et al teach detecting, by a device and based on an image, an object on a conveyor other than the adhesive (system 10 is operative for capturing an image of a package as it travels on a conveyor belt, and using the image to detect and decode a bar code and OCR address data that appear on the package (column 10, lines 11-24)).
Therefore, in combination, Swope can detect a package on a conveyor belt as taught by Moed, and print and apply a printed label onto the detected package. This would optimize production by detecting the desired package and printing the label onto the package, instead of printing a label and then finding the desired package to affix the label to. The rejection is maintained.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 4-6, 8, 9, 11-15, and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Swope et al US 2022/0297869 in view of Moed et al US 5770841 further in view of Tomochika et al US 2021/0012524.
Regarding claim 1, Swope et al teaches a method, comprising:
printing, by the device and via a label printer, a label associated with the object (robotic device may print the label. For example, based on receiving the label information, the controller may cause the printing device to print content to the label (paragraph 0021));
moving, by the device and via a robotic arm, the label printer into an application position that aligns with a path of the object position on the conveyor (the robotic device may print the label and cause the printing device to print content to the label by configuring and positioning to output the label (with the printed content) to an adhesive application zone of the robotic device (paragraph 0021). The robotic device may convey an adhesive into the adhesive application zone. For example, the controller may cause an adhesive conveyor to move the adhesive into the adhesive application zone based on the label being removed from the adhesive application zone (paragraph 0025)); and
applying, by the device and via the robotic arm and the label printer, the label to the object at the application position (robotic device may convey an adhesive into the adhesive application zone. For example, the controller may cause an adhesive conveyor to move the adhesive into the adhesive application zone based on the label being removed from the adhesive application zone (paragraph 0025)).
Swope et al teaches a robotic arm attached to an adhesive label application system that comprises a label printer (fig 1 as shown below). The robotic arm is connected to the label printer by a coupler and an electromechanical coupling. Since robotic arms are conventional electromechanical devices (see Abdul Jabbar et al US 20220063194), the robotic arm would have to be attached to the label printer using a coupler and electromechanical coupling.
[Swope et al, fig 1 (media_image1.png), greyscale]
Swope et al fails to teach detecting, by a device and based on an image, an object on a conveyor.
Moed et al teach detecting, by a device and based on an image, an object on a conveyor other than the adhesive (system 10 is operative for capturing an image of a package as it travels on a conveyor belt, and detecting and decoding a bar code and OCR address data that appear on the package (column 10, lines 11-24)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al with detecting, by a device and based on an image, an object on a conveyor.
The reason for doing so would be to optimize production.
Swope et al in view of Moed et al fails to teach: determine, using the image processing model, an object position of the object relative to the conveyor.
Tomochika et al US 2021/0012524 teaches determine, using the image processing model, an object position of the object relative to the conveyor (captured image itself is used to generate the training dataset; machine learning can be performed using a captured image close to the image to be obtained in an actual environment wherein a learnt model is utilized (paragraph 0027); a belt conveyor that transports an object with a transport belt is preferably used while the object is placed on a training dataset generation jig (paragraph 0049); the training dataset generation jig of the present invention is used in the training dataset generation method of the present invention, and as described above, is configured with an area to be a guide of placement position of the object (paragraph 0051)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al in view of Moed et al to determine, using the image processing model, an object position of the object relative to the conveyor.
The reason for doing so would be to optimize production and improve automation accuracy.
Regarding claim 4, Swope et al in view of Moed et al fails to teach the limitations of claim 4.
Tomochika et al teaches wherein the object is detected using an image processing model that is configured to identify the object on the conveyor as depicted in the image (captured image itself is used to generate the training dataset; machine learning can be performed using a captured image close to the image to be obtained in an actual environment wherein a learnt model is utilized (paragraph 0027); a belt conveyor that transports an object with a transport belt is preferably used while the object is placed on a training dataset generation jig (paragraph 0049); the training dataset generation jig of the present invention is used in the training dataset generation method of the present invention, and as described above, is configured with an area to be a guide of placement position of the object (paragraph 0051)).
Therefore, the motivation to combine Swope et al with Tomochika et al is the same as the motivation set forth in the rejection of claim 1.
Regarding claim 5, Swope et al teaches analyzing, using an image processing model, the object to identify a label receiving area on the object that is to receive the label, wherein the label is applied to the label receiving area based on controlling the robotic arm to move the label printer across the label receiving area (based on receiving the label information, the robotic device may navigate to the receiving surface and/or position the robotic device at a location that permits the robotic device to apply an adhesive label to the receiving surface (paragraph 0020)).
Regarding claim 6, Swope et al teaches wherein the label receiving area is identified based on a size of the label (the length of the adhesive-carrying tape that the controller moves through the adhesive application zone may be based on a size of the label (paragraph 0026)).
Regarding claim 8, Swope et al teaches wherein moving the label printer into the application position comprises: causing the robotic arm to remove the label printer from a docking station controlling the robotic arm, wherein the docking station is configured to supply power to the label printer when the label printer is docked at the docking station, and wherein the label printer is to receive power from the robotic arm based on being attached to the robotic arm (grasping instrument 208 may be configured to grasp a label as the label is printed and/or after the label is printed and provided to an adhesive application zone 210 (e.g., an adhesive application zone described elsewhere herein) of the arrangement 200. The grasping instrument 208 may be communicatively coupled to a positioning instrument (paragraph 0036)). Although Swope et al does not explicitly teach supplying power to the robotic arm that comprises a label printer, under the rationale of KSR it would have been obvious to try an adhesive label system in which a printing device (docking station) supplies power to the robotic arm that comprises a printer (see the rejection of claim 1).
Regarding claim 9, Swope et al US 2022/0297869 teaches a device (device 400 (paragraph 0049)), comprising:
one or more memories (memory 430 (paragraph 0049)); and
one or more processors (processor 420 (paragraph 0049)), coupled to the one or more memories (fig 4), configured to:
print, via a label printer, a label associated with the object (robotic device may print the label. For example, based on receiving the label information, the controller may cause the printing device to print content to the label (paragraph 0021));
move a label printer into an application position that aligns with a path of the object position on the conveyor (the robotic device may print the label and cause the printing device to print content to the label by configuring and positioning to output the label (with the printed content) to an adhesive application zone of the robotic device (paragraph 0021). The robotic device may convey an adhesive into the adhesive application zone. For example, the controller may cause an adhesive conveyor to move the adhesive into the adhesive application zone based on the label being removed from the adhesive application zone (paragraph 0025)).
Swope et al teaches a robotic arm attached to an adhesive label application system that comprises a label printer (fig 1). The robotic arm is connected to the label printer by a coupler and an electromechanical coupling. Since robotic arms are conventional electromechanical devices (see Abdul Jabbar et al US 20220063194), the robotic arm would have to be attached to the label printer using a coupler and electromechanical coupling.
[Swope et al, fig 1 (media_image1.png), greyscale]
Swope et al fails to teach receive, from a camera, an image that depicts an object on a conveyor;
Moed et al teaches receive, from a camera, an image that depicts an object on a conveyor other than an adhesive (system 10 is operative for capturing an image of a package as it travels on a conveyor belt, and detecting and decoding a bar code and OCR address data that appear on the package (column 10, lines 11-24)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al to receive, from a camera, an image that depicts an object on a conveyor.
The reason for doing so would be to optimize production.
Swope et al in view of Moed et al fails to teach: based on detecting the object in the image using an image processing model.
Tomochika et al US 2021/0012524 teaches based on detecting the object in the image using an image processing model (captured image itself is used to generate the training dataset; machine learning can be performed using a captured image close to the image to be obtained in an actual environment wherein a learnt model is utilized (paragraph 0027)), and determine, using the image processing model, an object position of the object relative to the conveyor (a belt conveyor that transports an object with a transport belt is preferably used while the object is placed on a training dataset generation jig (paragraph 0049); the training dataset generation jig of the present invention is used in the training dataset generation method of the present invention, and as described above, is configured with an area to be a guide of placement position of the object (paragraph 0051)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al in view of Moed et al to detect the object in the image using an image processing model.
The reason for doing so would be to optimize production and improve automation accuracy.
Regarding claim 11, Swope et al teaches wherein the one or more processors are further configured to: analyze the object to identify a label receiving area on the object that is to receive the label, wherein the robotic arm is controlled to apply the label to the label receiving area on the object (based on receiving the label information, the robotic device may navigate to the receiving surface and/or position the robotic device at a location that permits the robotic device to apply an adhesive label to the receiving surface (paragraph 0020)).
Swope et al fails to teach an image processing model and an object position of the object relative to the conveyor.
Tomochika et al US 2021/0012524 teaches an image processing model (captured image itself is used to generate the training dataset; machine learning can be performed using a captured image close to the image to be obtained in an actual environment wherein a learnt model is utilized (paragraph 0027)), and determine, using the image processing model, an object position of the object relative to the conveyor (a belt conveyor that transports an object with a transport belt is preferably used while the object is placed on a training dataset generation jig (paragraph 0049); the training dataset generation jig of the present invention is used in the training dataset generation method of the present invention, and as described above, is configured with an area to be a guide of placement position of the object (paragraph 0051)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al with an image processing model.
The reason for doing so would be to optimize production and improve automation accuracy.
Regarding claim 12, Swope et al teaches wherein the label receiving area is identified based on a size of the label (the length of the adhesive-carrying tape that the controller moves through the adhesive application zone may be based on a size of the label (paragraph 0026)).
Regarding claim 13, Swope et al teaches wherein the electromechanical coupling is configured to transfer electrical power from the robotic arm to one or more components of the label printer (since robotic arms are conventional electromechanical devices (see Abdul Jabbar et al US 20220063194), the robotic arm would have to be attached to the label printer using a coupler and electromechanical coupling that would be able to transfer electrical power from the robotic arm to one or more components of the label printer).
Regarding claim 14, Swope et al teaches wherein the one or more processors, to cause the robotic arm to move the label printer into the application position, are configured to: cause the robotic arm to remove the label printer from a docking station to the application position, wherein the docking station is configured to supply power to the label printer prior to the label printer being coupled to the robotic arm (grasping instrument 208 may be configured to grasp a label as the label is printed and/or after the label is printed and provided to an adhesive application zone 210 (e.g., an adhesive application zone described elsewhere herein) of the arrangement 200. The grasping instrument 208 may be communicatively coupled to a positioning instrument (paragraph 0036)). Although Swope et al does not explicitly teach supplying power to the robotic arm that comprises a label printer, under the rationale of KSR it would have been obvious to try an adhesive label system in which a printing device (docking station) supplies power to the robotic arm that comprises a printer (see the rejection of claim 9).
Regarding claim 15, Swope et al teaches a system, comprising:
a label printer (robotic device may print the label. For example, based on receiving the label information, the controller may cause the printing device to print content to the label (paragraph 0021));
a camera (controller may use a camera (or another type of sensor) to determine or identify the location of the receiving surface (paragraph 0032));
a robotic arm (fig 1); and
a controller (fig 1) configured to:
print, via a label printer and based on detecting the object in the image using an image processing model, a label associated with the object (robotic device may print the label. For example, based on receiving the label information, the controller may cause the printing device to print content to the label (paragraph 0021));
move, via a robotic arm, a label printer into an application position that aligns with a path of the object position on the conveyor (the robotic device may print the label and cause the printing device to print content to the label by configuring and positioning to output the label (with the printed content) to an adhesive application zone of the robotic device (paragraph 0021). The robotic device may convey an adhesive into the adhesive application zone. For example, the controller may cause an adhesive conveyor to move the adhesive into the adhesive application zone based on the label being removed from the adhesive application zone (paragraph 0025)); and
apply, via the robotic arm and the label printer, the label to the object at the application position (robotic device may convey an adhesive into the adhesive application zone. For example, the controller may cause an adhesive conveyor to move the adhesive into the adhesive application zone based on the label being removed from the adhesive application zone (paragraph 0025)).
Swope et al teaches a robotic arm attached to an adhesive label application system that comprises a label printer (fig 1). The robotic arm is connected to the label printer by a coupler and an electromechanical coupling. Since robotic arms are conventional electromechanical devices (see Abdul Jabbar et al US 20220063194), the robotic arm would have to be attached to the label printer using a coupler and electromechanical coupling.
[Swope et al, fig 1 (media_image1.png), greyscale]
Swope et al fails to teach receive, from a camera, an image that depicts an object on a conveyor other than adhesive;
Moed et al teaches receive, from a camera, an image that depicts an object on a conveyor other than adhesive (system 10 is operative for capturing an image of a package as it travels on a conveyor belt, and detecting and decoding a bar code and OCR address data that appear on the package (column 10, lines 11-24)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al to receive, from a camera, an image that depicts an object on a conveyor.
The reason for doing so would be to optimize production and improve automation accuracy.
Swope et al in view of Moed et al fails to teach detect the object in the image using an image processing model;
determine, using the image processing model, an object position of the object relative to the conveyor;
Tomochika et al US 2021/0012524 teaches detect the object in the image using an image processing model (captured image itself is used to generate the training dataset; machine learning can be performed using a captured image close to the image to be obtained in an actual environment wherein a learnt model is utilized (paragraph 0027)), and determine, using the image processing model, an object position of the object relative to the conveyor (a belt conveyor that transports an object with a transport belt is preferably used while the object is placed on a training dataset generation jig (paragraph 0049); the training dataset generation jig of the present invention is used in the training dataset generation method of the present invention, and as described above, is configured with an area to be a guide of placement position of the object (paragraph 0051)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al in view of Moed et al to determine, using the image processing model, an object position of the object relative to the conveyor.
The reason for doing so would be to optimize production and improve automation accuracy.
Regarding claim 17, Swope et al teaches wherein the controller is further configured to: analyze the object to identify a label receiving area on the object that is to receive the label, wherein the robotic arm is controlled to apply the label to the label receiving area on the object (based on receiving the label information, the robotic device may navigate to the receiving surface and/or position the robotic device at a location that permits the robotic device to apply an adhesive label to the receiving surface (paragraph 0020)).
Swope et al in view of Moed et al fails to teach an image processing model and an object position of the object relative to the conveyor.
Tomochika et al US 2021/0012524 teaches an image processing model (captured image itself is used to generate the training dataset; machine learning can be performed using a captured image close to the image to be obtained in an actual environment wherein a learnt model is utilized (paragraph 0027)), and determine, using the image processing model, an object position of the object relative to the conveyor (a belt conveyor that transports an object with a transport belt is preferably used while the object is placed on a training dataset generation jig (paragraph 0049); the training dataset generation jig of the present invention is used in the training dataset generation method of the present invention, and as described above, is configured with an area to be a guide of placement position of the object (paragraph 0051)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al in view of Moed et al with an image processing model.
The reason for doing so would be to optimize production and improve automation accuracy.
Regarding claim 18, Swope et al teaches wherein the label receiving area is identified based on a size of the label (the length of the adhesive-carrying tape that the controller moves through the adhesive application zone may be based on a size of the label (paragraph 0026)).
Regarding claim 19, Swope et al teaches wherein the electromechanical coupling is configured to transfer electrical power from the robotic arm to one or more components of the label printer (since robotic arms are conventional electromechanical devices (see Abdul Jabbar et al US 20220063194), the robotic arm would have to be attached to the label printer using a coupler and electromechanical coupling that would be able to transfer electrical power from the robotic arm to one or more components of the label printer).
Regarding claim 20, Swope et al teaches wherein the controller, to move the label printer into the application position, is configured to: cause the robotic arm to remove the label printer from a docking station to the application position, wherein the docking station is configured to supply power to the label printer prior to the label printer being coupled to the robotic arm (grasping instrument 208 may be configured to grasp a label as the label is printed and/or after the label is printed and provided to an adhesive application zone 210 (e.g., an adhesive application zone described elsewhere herein) of the arrangement 200. The grasping instrument 208 may be communicatively coupled to a positioning instrument (paragraph 0036)). Although Swope et al does not explicitly teach supplying power to the robotic arm that comprises a label printer, under the rationale of KSR it would have been obvious to try an adhesive label system in which a printing device (docking station) supplies power to the robotic arm that comprises a printer (see the rejection of claim 15).
Claim(s) 2 is/are rejected under 35 U.S.C. 103 as being unpatentable over Swope et al US 2022/0297869 in view of Moed et al US 5770841 further in view of Tomochika et al US 2021/0012524 further in view of Sato US 20200371722.
Regarding claim 2, Swope et al in view of Moed et al further in view of Tomochika et al teaches all of the limitations of claim 1.
Swope et al teaches that devices of environment 300 may interconnect via wired connections, wireless connections, or a combination of wired and wireless connections (paragraph 0042).
Swope et al in view of Moed et al further in view of Tomochika et al fails to teach wherein the label printer is selected, from a plurality of label printers, to apply the label based on a print instruction associated with the object and the label.
Sato teaches wherein the label printer is selected, from a plurality of label printers, to apply the label based on a print instruction associated with the object and the label (label printer 2 and the printer 3 may be a plurality of printers. The mobile device 1 to which a plurality of label printers in which types of the set label papers are different is connected, an appropriate label printer is selected in accordance with a label object to be printed (paragraph 0089)).
Because the device environment of Swope et al may be connected wirelessly, it would be obvious to connect the environment 300 of Swope et al, via the wireless connectivity of mobile device 1, with the plurality of label printers of Sato.
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified the wireless connection of Swope et al in view of Moed et al further in view of Tomochika et al such that the label printer is selected, from a plurality of label printers, to apply the label based on a print instruction associated with the object and the label.
The reason for doing so would be to select a printer that will provide a desired type of label.
Claim(s) 3, 10 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Swope et al US 2022/0297869 in view of Moed et al US 5770841 further in view of Tomochika et al US 2021/0012524 further in view of Sato US 20200371722 further in view of Christman et al US 2019/0212956.
Regarding claim 3, Swope et al in view of Moed et al further in view of Tomochika et al further in view of Sato teaches all of the limitations of claim 2.
Swope et al teaches based on receiving the label information, the controller may cause the printing device to print content to the label (paragraph 0021)
Swope et al in view of Moed et al further in view of Tomochika et al further in view of Sato fails to teach wherein the label is printed to include content that is associated with an object identifier that is indicated on the object;
Christman et al teaches wherein the label is printed to include content that is associated with an object identifier that is indicated on the object (scan data may include an identifier of the object (e.g., a unique identifier). The computing device determines whether shipment data for the object is in the shipping database and then determines shipping information for the object based on whether shipment data for the object is in the shipping database. The computing device sends the shipping information to the label printer. The label printer prints a shipping label for the object based on the shipping information (paragraph 0026)).
Therefore, it would have been obvious to a person with ordinary skill in the art to have modified Swope et al in view of Moed et al further in view of Tomochika et al further in view of Sato such that the label is printed to include content that is associated with an object identifier that is indicated on the object.
The reason for doing so would be to identify an item based on information printed on the label.
Regarding claim 10, Swope et al in view of Moed et al further in view of Tomochika et al teaches all of the limitations of claim 9.
Swope et al in view of Moed et al further in view of Tomochika et al fails to teach wherein the label printer is selected, from a plurality of label printers, based on being associated with the object identifier.
Sato teaches wherein the label printer is selected, from a plurality of label printers, based on being associated with the object identifier (label printer 2 and the printer 3 may be a plurality of printers. The mobile device 1 to which a plurality of label printers in which types of the set label papers are different is connected, an appropriate label printer is selected in accordance with a label object to be printed (paragraph 0089)).
Because the device environment of Swope et al may be connected wirelessly, it would be obvious to connect the environment 300 of Swope et al, via the wireless connectivity of mobile device 1, with the plurality of label printers of Sato. Therefore, it would have been obvious to a person with ordinary skill in the art to have modified the wireless connection of Swope et al in view of Moed et al further in view of Tomochika et al such that the label printer is selected, from a plurality of label printers, to apply the label based on a print instruction associated with the object and the label.
The reason for doing so would be to select a printer that will provide the desired type of label.
Swope et al in view of Moed et al further in view of Tomochika et al further in view of Sato fails to teach wherein the one or more processors are further configured to: process the image to identify an object identifier associated with the object.
Christman et al teaches wherein the one or more processors are further configured to: process the image to identify an object identifier associated with the object (scan data may include an identifier of the object (e.g., a unique identifier); the computing device determines whether shipment data for the object is in the shipping database and then determines shipping information for the object based on whether shipment data for the object is in the shipping database; the computing device sends the shipping information to the label printer; and the label printer prints a shipping label for the object based on the shipping information (paragraph 0026)).
Therefore, it would have been obvious to a person of ordinary skill in the art to have modified Swope et al in view of Moed et al, further in view of Tomochika et al, further in view of Sato, such that the one or more processors are further configured to: process the image to identify an object identifier associated with the object.
The reason for doing so would be to identify an item based on information printed on the label.
Regarding claim 16, Swope et al in view of Moed et al further in view of Tomochika et al teaches all of the limitations of claim 15.
Swope et al in view of Moed et al further in view of Tomochika et al fails to teach wherein the label printer is selected based on being associated with the object identifier.
Sato teaches wherein the label printer is selected based on being associated with the object identifier (the label printer 2 and the printer 3 may each be a plurality of printers; when the mobile device 1 is connected to a plurality of label printers in which the types of the set label papers are different, an appropriate label printer is selected in accordance with the label object to be printed (paragraph 0089)).
Because the device environment of Swope et al may be connected wirelessly, it would be obvious to connect the environment 300 of Swope et al, via the wireless connectivity of mobile device 1, with the plurality of label printers of Sato. Therefore, it would have been obvious to a person of ordinary skill in the art to have modified the wireless connection of Swope et al in view of Moed et al, further in view of Tomochika et al, such that the label printer is selected based on being associated with the object identifier.
The reason for doing so would be to select a printer that will provide the desired type of label.
Swope et al in view of Moed et al further in view of Tomochika et al further in view of Sato fails to teach wherein the controller is further configured to: identify, using the image processing model, an object identifier depicted in the image.
Christman et al teaches wherein the controller is further configured to: identify, using the image processing model, an object identifier depicted in the image (scan data may include an identifier of the object (e.g., a unique identifier); the computing device determines whether shipment data for the object is in the shipping database and then determines shipping information for the object based on whether shipment data for the object is in the shipping database; the computing device sends the shipping information to the label printer; and the label printer prints a shipping label for the object based on the shipping information (paragraph 0026)).
Therefore, it would have been obvious to a person of ordinary skill in the art to have modified Swope et al in view of Moed et al, further in view of Tomochika et al, further in view of Sato, such that the controller is further configured to: identify, using the image processing model, an object identifier depicted in the image.
The reason for doing so would be to identify an item based on information printed on the label.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication should be directed to Michael Burleson, whose telephone number is (571) 272-7460 and fax number is (571) 273-7460. The examiner can normally be reached Monday through Friday from 8:00 a.m. – 4:30 p.m. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Akwasi Sarpong, can be reached at (571) 270-3438.
Michael Burleson
Patent Examiner
Art Unit 2681
January 21, 2026
/MICHAEL BURLESON/
/AKWASI M SARPONG/SPE, Art Unit 2681 1/26/2026