Prosecution Insights
Last updated: April 19, 2026
Application No. 18/559,509

METHOD FOR OPERATING A LABELLING SYSTEM

Current status: Non-Final Office Action (§103, §112)
Filed: Nov 07, 2023
Examiner: KOCH, GEORGE R
Art Unit: 1745
Tech Center: 1700 — Chemical & Materials Engineering
Assignee: Espera-Werke GmbH
OA Round: 1 (Non-Final)

Predicted outcome:
Grant Probability: 73% (Favorable); 90% with interview
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 10m

Examiner Intelligence

Career Allow Rate: 73% (781 granted / 1075 resolved; +7.7% vs TC avg, above average)
Interview Lift: +17.6% allowance lift in resolved cases with an interview vs without (a strong lift)
Typical Timeline: 2y 10m average prosecution
Career History: 1119 total applications across all art units; 44 currently pending
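As a sanity check, the dashboard's headline figures follow from the raw counts. The rounding convention and the additive treatment of the interview lift are assumptions; the dashboard does not state how it combines them.

```python
# Reproduce the examiner's headline statistics from the raw counts.
# Rounding and the additive interview lift are assumptions.

granted = 781
resolved = 1075

allow_rate = granted / resolved * 100          # career allowance rate, percent
print(round(allow_rate))                        # -> 73

interview_lift = 17.6                           # percentage-point lift with interview
with_interview = allow_rate + interview_lift    # assumed additive, in percentage points
print(round(with_interview))                    # -> 90
```

Under these assumptions the 73% career rate plus the 17.6-point lift reproduces the displayed 90% with-interview figure.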

Statute-Specific Performance

§101:  0.3% (-39.7% vs TC avg)
§103: 53.6% (+13.6% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 17.1% (-22.9% vs TC avg)

Deltas are measured against an estimated Tech Center average. Based on career data from 1075 resolved cases.
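The "vs TC avg" deltas imply a Tech Center baseline for each statute. A quick check, assuming the delta is simply the examiner's share minus the Tech Center average:

```python
# Recover the implied Tech Center average for each rejection statute,
# assuming: delta = examiner share - TC average (an assumption about
# how the dashboard computes its deltas).

examiner_share = {"101": 0.3, "103": 53.6, "102": 20.3, "112": 17.1}
delta_vs_tc = {"101": -39.7, "103": +13.6, "102": -19.7, "112": -22.9}

tc_average = {s: round(examiner_share[s] - delta_vs_tc[s], 1)
              for s in examiner_share}
print(tc_average)   # -> {'101': 40.0, '103': 40.0, '102': 40.0, '112': 40.0}
```

Under this assumption every statute's implied baseline comes out to 40.0%, suggesting the dashboard compares each share against a single flat Tech Center estimate rather than per-statute averages.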

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant’s election without traverse of group I, claims 17-40, in the reply filed on 11/18/2025 is acknowledged.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:

“feed arrangement” in claims 17, 25, 36, 37, 38 and 40. The specification discloses that the corresponding structure sufficient to perform the recited function is a belt or roller conveyor, reciting that “The feed arrangement is in particular a belt conveyor or a roller conveyor for moving the respective packages, it being possible to carry out labeling of the moved packages in ongoing operation.”

“label dispensing arrangement” in claims 17, 22 and 40. The specification discloses that the corresponding structure sufficient to perform the recited function is equipped with a plurality of label strips and can include a material guide with either a dispensing edge or an adhesive-free equivalent, disclosing that “Preferably, the label is detached from a material strip 9 by means of the label dispensing arrangement 5.” The specification also discloses that “The material strip 9 is in this case guided over a dispensing edge 10 so that the labels are detached. It is likewise also possible to envision the use of adhesive-free labels, which are only subsequently provided with an adhesive face or are affixed onto an adhesive face on the respective package 3.” The disclosure of the strip being guided over a dispensing edge, in combination with the visual description in Figure 1, is sufficient structure to perform the recited function, as it discloses a material guide with a dispensing edge and/or an adhesive-free equivalent.

“label affixing arrangement” in claims 17, 23 and 40. The specification discloses the corresponding structure to perform the recited function as a stamp, disclosing that “preferably the label affixing arrangement 6 has a stamp 11 for affixing a label onto the upper side of the package 3”. The specification further discloses that “alternatively, it is conceivable for the label to be affixed contactlessly, for example by a suction and blowing foot of the stamp 11 ejecting, that is to say pneumatically affixing, the label onto the package 3 by generating a compressed-air impulse directed toward the package 3”.

“printer arrangement” in claims 17, 19, 21 and 40. The specification discloses the corresponding structure to perform the recited function as a laser or inkjet printer, disclosing that the printer arrangement “may likewise have a laser printer and/or inkjet printer.”

“control arrangement” in claims 17, 33, 39 and 40. The specification discloses the corresponding structure to perform the recited function as a computer, a cloud-based server or a mobile device, disclosing that “The control arrangement 8 oversees the control-technology tasks occurring in the labeling routine. Preferably, the control arrangement 8 has at least one computer device, which is adapted to drive the functional units. Fig. 1 shows by way of example a local control unit 13 of the labeling apparatus 2, which communicates with a cloud-based server 14 via a wired and/or wireless network, for example a local network, a mobile communications network and/or the Internet. A mobile device 15, which likewise communicates with the further components of the control arrangement 8 via the network, is furthermore provided. Other variants of the control arrangement 8 are conceivable. For example, alternatively to the control arrangement 8 represented, having a plurality of components, merely a local control arrangement 8 may also be provided on the labeling apparatus 2.”

“sensor arrangement” in claims 17, 34, 38 and 40. The specification discloses that the corresponding structure to perform the recited function is a camera, IR sensor or laser scanner, disclosing that “The labeling apparatus 2 furthermore has a sensor arrangement 16, which is preferably configured as an optical sensor arrangement and, here and preferably, as a camera. By means of the sensor arrangement 16, images 17 of the respective packages 3 are recorded. Accordingly, the images 17 are preferably camera images, in particular two-dimensional or three-dimensional image information items of the respective package 3. The camera may be configured as a color camera, and in particular as a 3D camera. Further configurations of the sensor arrangement 16 are conceivable, for example with IR sensors or the like. The sensor arrangement 16 is here and preferably arranged on the feed arrangement 4, so that images 17 of the respective package 3 on the feed arrangement 4 are recorded, preferably in the case of a moved package 3. Further configurations of the sensor arrangement 16 which can record images 17 representative of the appearance of the packages 3, for example by means of laser scanning or the like, are also conceivable.”

“weighing arrangement” in claim 21.
The specification discloses to one of ordinary skill in the art that the corresponding structure to perform the recited function is a weight scale, disclosing that “As is shown in fig. 1, a weighing arrangement 18 by means of which weight values for the individual packages 3 are determined is furthermore provided as a functional unit here. The printing of the label by means of the printer arrangement 7 may furthermore be carried out here as a function of the weight values of the respective packages 3. For example, the weight value or values determinable therefrom with the aid of the package class, for example net weight, gross weight, tare and/or a weight range assigned to the weight, is/are printed. Preferably, the price information assigned to the package class contains a basic price which is used to calculate a weight-dependent package price, the respective label being printed by means of the printer arrangement 7 with the package price determined from a weight value and a basic price. The weighing arrangement 18 may be operated as a function of the respective package class. For example, the package class is assigned weighing parameters, for instance weight ranges and/or scale intervals, and the weighing arrangement 18 determines the weight values on the basis of the assigned weighing parameters.” Furthermore, Figure 1 shows that the weighing arrangement is located underneath the packages. Although the term “weight scale” is never used, a person of ordinary skill in the art would immediately appreciate that Figure 1 and the above-cited specification passages describe a conventional weight scale.

“sorting arrangement” in claim 26. The specification discloses that the corresponding structure to perform the recited function is a compressed-air impulse or sorting paths with switching points, disclosing that “The sorting may be rejection of individual packages 3, for instance removal from the feed arrangement 4, which is carried out for example via a compressed-air impulse. A multipath sorting arrangement may likewise be used, which distributes the packages 3 onto various sorting paths, for example via one or more switching points.”

“aligning arrangement” in claims 36, 37 and 38. The specification discloses that the corresponding structure to perform the recited function is a guide, which can be displaceable, disclosing that “According to the configuration which is represented in fig. 1 and is likewise preferred, an aligning arrangement 29 by means of which the individual packages 3 are positioned on the feed arrangement 4 is furthermore provided as a functional unit. The aligning arrangement 29 has at least one guide element 30, here guide elements 30 which are arranged on both sides of the feed arrangement 4 and are adjacent to and/or protrude into a conveyor region of the feed arrangement 4 at least in certain sections. The guide elements 30 in this case align the packages 3 via contact. Preferably, the guide elements 30 are displaceable in order to allow adjustment of the alignment.” See also Figure 1, showing that the aligning element is a guide element, i.e., a guide or structural element that protrudes into the feed arrangement 4.

“guide element” in claim 37. The specification discloses that the corresponding structure to perform the recited function is a guide, which can be displaceable; see the passage from the specification quoted above for the “aligning arrangement.” See also Figure 1, showing that the guide element is a guide or structural element that protrudes into the feed arrangement 4.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 40 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 40 recites the limitation “a trained machine learning model” in line 28 (the final line of the claim). There is insufficient antecedent basis for this limitation in the claim. Line 25 (three lines earlier) also recites “a trained machine learning model”, and it is not clear whether these are the same or different models. The examiner suggests amending line 28 to recite “the trained machine learning model”.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 17-21, 23-25, 27-35 and 39-40 are rejected under 35 U.S.C. 103 as being unpatentable over Rodgers (US 9802728 B1) in view of Fraunhofer (DE 102017107837 A1, cited in IDS filed 11/7/2023, new translation provided).

As to claim 17, Rodgers discloses a method for operating a labeling system having at least one labeling apparatus for individually labeling packages, the labeling apparatus having at least a feed arrangement (such as conveyor 150, 250 or a pair of conveyors 350, 352), a label dispensing arrangement (“may include various label supply facilities such as rolls, trays or carriages which may provide blank stock to the label printer 276 for printing”, “one or more components for transferring a printed label to the rotatable tamp head”), a label affixing arrangement (tamp assembly 170, 270, 370 and tamp head 172, 272, 372) and a printer arrangement (“label printer 276 may be any form of analog or digital printer, e.g., a toner-based, inkjet, solid ink, inkless, dot-matrix or daisy wheel printer”) as functional units, which are driven by a control arrangement (“controller 274”) of the labeling system in a labeling routine, wherein: in the labeling routine, packages are transported by means of the feed arrangement, labels are dispensed by means of the label dispensing arrangement, an individual one of the labels dispensed by means of the label dispensing arrangement is affixed onto a respective one of the packages by means of the label affixing arrangement and the individual one of the labels dispensed by means of the label dispensing arrangement is printed by means of the printer arrangement (see Figure 3, below); the labeling apparatus has a sensor
arrangement (“an imaging device 366 (e.g., a digital camera) for capturing imaging information in the form of digital images and/or range data from objects passing along the conveyors 350, 352, such as the container 30.”) by means of which images of the packages are recorded, the images of the packages are analyzed by means of the control arrangement in an analysis routine (“For example, as the container 30 passes beneath the depth sensor 362, dimensional information such as lengths, widths or heights of the container 30 may be determined, and based at least in part on such information, areas of one or more surfaces of the container 30, as well as a volume of the container 30, may be estimated.”), a classification of the respective package into a package class is derived by means of the analysis routine and operation of the labeling apparatus in the labeling routine is carried out as a function of the classification (“Based at least in part on the dimensional information, the mass and/or the imaging information obtained using the depth sensor 362, the scale 364 and the imaging device 366, the container 30 may be identified, and an orientation or alignment of the container 30 may be determined.”).

See column 2, lines 29-50, disclosing:

The systems and methods disclosed herein may be implemented in order to apply a label, e.g. a shipping label, to an object, e.g., a container including one or more items therein, in a manner that increases the likelihood that the label will remain readable and intact during a delivery of the object to a destination. Some advantages of the present disclosure are shown with regard to FIGS. 1A, 1B, 1C and 1D. Referring to FIG. 1A, a system 100 including a conveyor 150, a laser sensor 160 and a tamp assembly 170 is shown. The conveyor 150 includes a container 10A traveling thereon.
The container 10A includes a seam 12A and an adhesive tape 14A that seals the seam 12A, and the seam 12A and the tape 14A are aligned substantially coaxially with a direction of travel of the container 10A on the conveyor 150. The laser sensor 160 is configured to determine one or more attributes of the container 10A, e.g., a dimension such as a height, a width or a length of the container 10A, the seam 12A or the tape 14A, or an orientation or alignment of the container 10A, the seam 12A or the tape 14A, as the container 10A travels along the conveyor 150. The tamp assembly 170 includes a tamp head 172 for applying one or more labels to containers passing along the conveyor 150.

See also column 9, line 25 to column 10, line 25:

As is shown in FIG. 2, the fulfillment center 240 also includes the tamp assembly 270, which further includes a rotatable tamp head 272, a controller 274, a label printer 276 and a proximity sensor 278, which may operate independently or in conjunction with one another, at the instruction of the computer 242 or one or more other computers or computer devices. The rotatable tamp head 272 may be any component having a suitable surface for receiving a label to be affixed upon an object, contacting the object and disposing the label upon the object. Additionally, the controller 274 may be any form of actuator or motorized component for raising or lowering, or otherwise repositioning, e.g., rotating, an application surface of the rotatable tamp head 272 within a plane or about one or more axes. The controller 274 may cause the rotatable tamp head 272 to be raised, lowered or rotated using any form of prime mover, and may include one or more actuators which cause the rotatable tamp head 272 to press or roll a label into a surface of an object (e.g., a flat surface and/or cylinder), wrap a label around a corner or edge of an object, or “blow” or otherwise apply a label onto an object by way of fluid pressure in accordance with the present disclosure. Moreover, the rotatable tamp head 272 may further include any device or component for holding a label onto the rotatable tamp head 272 prior to applying the label to an object, including one or more vacuum or adhesive systems.

(37) The rotatable tamp head 272 may take any shape or any size. For example, the rotatable tamp head 272 may have an application surface having a size consistent with a size of a label to be applied thereby, i.e., an area of approximately four inches by six inches (4″×6″) where the tamp assembly 270 is to be used to apply a four inch by six inch (4″×6″) label, or an object to which the label is to be applied. The rotatable tamp head 272 may also be chamfered, shaped or otherwise modified to correspond to any system requirements, however. Additionally, one or more rotatable tamp heads 272 may be releasably mounted within the tamp assembly 270 and interchangeably provided for use with labels or objects of various sizes, or in connection with various applications.

(38) The label printer 276 may be any form of analog or digital printer, e.g., a toner-based, inkjet, solid ink, inkless, dot-matrix or daisy wheel printer, configured to impose one or more characters, symbols or other markings onto a label or other adhesive paper-type product. The label printer 276 may be configured to operate separately or in conjunction with the rotatable tamp head 272, and may include various label supply facilities such as rolls, trays or carriages which may provide blank stock to the label printer 276 for printing. The label printer 276 may be further connected to or otherwise associated with one or more computer devices, such as a direct or networked connection with the computer 242 or a networked connection with any other computer device outside the fulfillment center 240, and may receive one or more commands to print instructions or data onto a predetermined label of a selected type or form. Moreover, the label printer 276 may be used to print labels for various applications. For example, a label printer 276 may be configured to print shipping labels onto one or more containers of an outbound shipment that is being prepared for delivery from the fulfillment center 240 to a customer 230, while also printing labels onto one or more containers of an inbound shipment arriving at the fulfillment center 240 from the vendor 220. Additionally, the label printer 276 may further include one or more components for transferring a printed label to the rotatable tamp head 272 for application onto one or more objects.

See column 12, line 61, disclosing:

The system 300 shown in FIG. 3 includes a pair of conveyors 350, 352, a plurality of sensors 360 and a tamp assembly 370 provided for applying or affixing one or more labels to a container 30.
The plurality of sensors 360 includes a depth sensor or range camera 362 (e.g., a laser sensor) oriented to capture dimensional information or any other attributes regarding the container 30, the seam 32 or any markings or other identifiers on the container 30 or the seam 32, as the container 30 passes thereunder, a scale 364 for determining a mass of the container 30 and an imaging device 366 (e.g., a digital camera) for capturing imaging information in the form of digital images and/or range data from objects passing along the conveyors 350, 352, such as the container 30. The tamp assembly of FIG. 3 includes a rotatable tamp head 372, a label printer 376 and a proximity sensor 378. See also Figure 3 (reproduced in the original Office action as a grayscale image).

Rodgers does not disclose that the analysis routine is based on an application of a trained machine learning model to the images, which is carried out by means of the control arrangement. However, Fraunhofer discloses and makes obvious that the analysis routine is based on an application of a trained machine learning model to the images, which is carried out by means of the control arrangement. See the translation, disclosing:

The sensor element preferably has an image sensor for acquiring image data as sensor data. For this purpose, in particular at least one camera is provided. Image data provide a very wide variety of possible features that may become relevant to a classifier. Therefore, the flexibility and versatility gained by training new functionality is particularly in demand. …

The control and evaluation unit is preferably designed to transmit to the computer network sensor data which do not correspond to the sensor data to be additionally classified. Such sensor data represent negative examples in addition to the positive examples, such as background images. This allows the classifier to deliver more reliable results after completing the training. The classifier preferably has a neural network. This refers to an artificial neural network, in particular an artificial convolutional neural network (CNN). Such a classifier has the required flexibility and is therefore very well suited to be expanded by training in a desired manner. …

The computer network according to the invention is the remote station for training a classifier for a sensor arrangement. As already explained above, the computer network can be a single computer in the smallest configuration, and it is preferably a combination of computers or processors, such as in a high-performance server rack, a company network or a cloud. In the computer network, an adaptable classifier is implemented, which preferably corresponds to a classifier of the sensor arrangement. The adaptable classifier is trained on the basis of the sensor data of the sensor arrangement to be classified, and the classifier data are then transmitted to the sensor arrangement. Via the classifier data, the sensor arrangement receives the information necessary to modify its classifier in accordance with what was gained by the training. …

The classifier is preferably a neural network. The remarks on the neural network of the sensor arrangement apply accordingly. The neural network is trained on the basis of the sensor data obtained from the sensor arrangement as positive examples, preferably enriched by negative examples which are also transmitted or stored in a separate database. Preferably, it is an implementation of the same neural network as on the sensor, because then it is sufficient to play back the trained parameters such as weighting factors. However, it is also conceivable to transmit the neural network as a whole, as a new, extended classifier, to the sensor arrangement. The computer network has large, scalable computing capacities that are not subject to the limitations of a sensor. As a result, even a complex neural network can be trained in shorter times and at a higher quality than would be possible within the sensor arrangement. …

The sensor data preferably comprise image data, and the arithmetic unit is configured to generate further image data for training from the received image data, in which image aberrations are added, the texture of the background is changed and/or the recording perspective is altered mathematically. In general, but especially for the training of a neural network with many nodes or even many layers (“deep learning”), many training records are required. The operator of the sensor arrangement would like to, and usually can, provide only a few positive examples. Therefore, it makes sense to reprocess and duplicate the image data (augmentation). In particular, the image data are combined with different textures for the background or with various image errors, which in turn correspond to typical on-site situations such as optical defects, contamination or damage to the recorded objects or symbols and the like. A mathematical change of the recording perspective also leads to many variants which are relevant in practice. For this purpose, the image, or the symbol to be learned therein, is reduced in size, enlarged and rotated in a planar or perspective manner. Augmentation is also useful in the case of sensor data other than image data, in which case corresponding variations are used. …

Overall, a completely automated training process in the computer network 12 thus results in a new or extended classifier, such as a trained convolutional network, with which a desired symbol is additionally recognizable in the sensor array 10.
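The augmentation the translation describes (rotating, mirroring, rescaling and adding image aberrations to a few positive examples) is a standard way to multiply scarce training data. A minimal sketch using NumPy, not code from Fraunhofer or the application; real pipelines would also apply rescaling and perspective warps, which need an interpolation library:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> list[np.ndarray]:
    """Generate training variants of one positive example: planar rotations,
    mirroring, and additive noise standing in for optical defects."""
    variants = []
    for k in range(4):                          # planar rotations in 90-degree steps
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))     # mirrored variant
    # simulate optical defects / contamination with additive Gaussian noise
    noisy = image + rng.normal(0.0, 0.05, size=image.shape)
    variants.append(np.clip(noisy, 0.0, 1.0))
    return variants

rng = np.random.default_rng(0)
example = rng.random((32, 32))                  # one grayscale positive example
train_set = augment(example, rng)
print(len(train_set))                           # -> 9
```

One original image yields nine training records here; with richer transform sets (backgrounds, scales, perspectives) the multiplication factor grows quickly, which is the point of the passage quoted above.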
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have based the analysis routine on an application of a trained machine learning model to the images, carried out by means of the control arrangement, as suggested by Fraunhofer, because, as a result, even a complex neural network can be trained in shorter times and at a higher quality than would be possible within the sensor arrangement.

As to claim 18, Rodgers discloses wherein the sensor arrangement is a camera. See column 8, line 40 to column 9, line 24:

The fulfillment center 240 may further include one or more item sensors 260, which may consist of or comprise one or more sensors, including but not limited to a scale or other device for determining a mass of one or more objects, a depth sensor or range camera for determining a distance to the one or more objects (e.g., a laser sensor); a digital camera or other imaging device for capturing still or moving images or multimedia regarding the one or more objects. For example, the scale may be aligned to weigh or otherwise determine a mass of the one or more objects passing along the conveyor 250. The depth sensor or range camera may be aligned to determine depth data or ranging data, e.g., a distance or depth to one or more faces or facets of an object. Some such devices may include infrared projectors for projecting infrared light onto one or more surfaces of an object and infrared sensors including arrays of pixel detectors for capturing digital imaging data regarding the wavelengths of the reflected light within different spectral bands, such as relatively lower frequency bands associated with infrared light, which may be projected upon an object in order to determine information regarding a distance to the object from which such light is reflected, or an orientation or configuration of the object.
For example, the reflected light within the infrared bands may be processed in order to recognize a distance to the object, as well as one or more dimensions (e.g., heights, widths or lengths) of the object. Additionally, the digital camera or other imaging device may be a digital area-scan or line-scan camera configured to capture light reflected from objects, in order to calculate or assign one or more quantitative values of the reflected light, and to generate one or more outputs based on such values, or to store such values in one or more data stores. Such devices may detect reflected light within their respective fields of view, which may be defined by a function of a distance between a sensor and a lens within the camera (viz., a focal length), as well as a location of the camera and an angular orientation of the camera's lens. Additionally, where an object appears within a depth of field, or a distance within the field of view where the clarity and focus is sufficiently sharp, a digital camera may capture light that is reflected off objects of any kind to a sufficiently high degree of resolution using one or more sensors thereof, and store information regarding the reflected light in one or more data files. Moreover, those of ordinary skill in the pertinent arts would further recognize that one or more of such item sensors 260 may be combined into a single sensor. For example, one such device is an RGB-Z sensor, which may capture not only color-based imaging information regarding an object (e.g., colors of pixels in an image of the object, expressed according to the RGB color model) but also information regarding distances from the object (e.g., a depth or a range z to the object). As to claim 19, Rodgers discloses wherein the printing of the individual one of the labels by means of the printer arrangement is carried out as a function of the package class of the respective one of the packages. 
See column 9, line 55 disclosing that “For example, a label printer 276 may be configured to print shipping labels onto one or more containers of an outbound shipment that is being prepared for delivery from the fulfillment center 240 to a customer 230, while also printing labels onto one or more containers of an inbound shipment arriving at the fulfillment center 240 from the vendor 220.” As to claim 20, Rodgers discloses wherein the printing is carried out as a function of product information assigned to the package class. See column 9, line 55 disclosing that “For example, a label printer 276 may be configured to print shipping labels onto one or more containers of an outbound shipment that is being prepared for delivery from the fulfillment center 240 to a customer 230, while also printing labels onto one or more containers of an inbound shipment arriving at the fulfillment center 240 from the vendor 220.” As to claim 21, Rodgers discloses wherein a weighing arrangement (such as scale 364) by means of which individual weight values for the packages are determined (“a scale 364 for determining a mass of the container 30”) is furthermore provided as a functional unit, wherein the printing of the individual one of the labels by means of the printer arrangement is furthermore carried out as a function of a respective weight value of the respective one of the packages (“the scale may be aligned to weigh or otherwise determine a mass of the one or more objects passing along the conveyor 250.”), and wherein the individual one of the labels for the respective one of the packages is printed by means of the printer arrangement with a package price determined from the respective weight value and a basic price. 
See column 13, line 23, disclosing “Based at least in part on the dimensional information, the mass and/or the imaging information obtained using the depth sensor 362, the scale 364 and the imaging device 366, the container 30 may be identified, and an orientation or alignment of the container 30 may be determined.” Column 13, line 35 discloses that “Once the container 30 has been identified, and an orientation of the container 30 has been determined, an appropriate label to be applied upon a surface of the container may be printed using the label printer 376”. As to claim 23, Rodgers discloses wherein the individual dispensed label is affixed onto the respective one of the packages by means of the label affixing arrangement according to an affixing task assigned to the package class. See column 13, line 40, disclosing: Next, the orientation of the rotatable tamp head 372 is compared to the orientation of the container 30. If the orientation of the rotatable tamp head 372 is sufficient for applying or affixing the printed label to the container 30 in a manner that will likely cause the information disposed upon the printed label to remain readable and intact during the fulfillment process, i.e., such that any characters or bar codes included on the printed label will likely remain legible regardless of whether the label is applied or affixed to the container 30 across the seam 32, then when the proximity sensor 378 senses the presence of the container 30 within a vicinity thereof, the application surface of the rotatable tamp head 372 may be pressed onto a surface of the container 30, thereby applying or affixing the printed label to the container 30. As to claim 24, Rodgers discloses wherein the individual dispensed label is affixed onto the respective one of the packages at an assigned affixing position. See column 13, line 40, disclosing: Next, the orientation of the rotatable tamp head 372 is compared to the orientation of the container 30. 
If the orientation of the rotatable tamp head 372 is sufficient for applying or affixing the printed label to the container 30 in a manner that will likely cause the information disposed upon the printed label to remain readable and intact during the fulfillment process, i.e., such that any characters or bar codes included on the printed label will likely remain legible regardless of whether the label is applied or affixed to the container 30 across the seam 32, then when the proximity sensor 378 senses the presence of the container 30 within a vicinity thereof, the application surface of the rotatable tamp head 372 may be pressed onto a surface of the container 30, thereby applying or affixing the printed label to the container 30. As to claim 25, Rodgers discloses wherein the respective one of the packages is transported by means of the feed arrangement according to a speed assigned to the package class. See column 20, line 15, disclosing “As is shown in FIG. 10A and FIG. 10B, the conveying unit 1050 may include belts or other drive components that are configured to cause the conveying unit 1050 and one or more objects provided thereon to travel at a speed of v.sub.CU.” See also column 22, line 51, disclosing “At box 1120, a rate of speed of the conveying unit is determined, and at box 1125, a distance to the intersect point from an edge of the conveying unit is determined. The speed of the conveying unit may be measured or estimated, e.g., according to a setting associated with one or more motors for driving the conveying unit.” As to claim 27, Rodgers does not disclose wherein the trained machine learning model is based on a trained neural network. Fraunhofer, as incorporated, discloses wherein the trained machine learning model is based on a trained neural network. See the translation, disclosing: The sensor element preferably has an image sensor for acquiring image data as sensor data. For this purpose, in particular at least one camera is provided. 
Image data provide a very wide variety of possible features that may become relevant to a classifier. Therefore, the flexibility and versatility gained by training new functionality are particularly in demand. … The control and evaluation unit is preferably designed to transmit to the computer network sensor data which do not correspond to the sensor data to be additionally classified. Such sensor data represent negative examples in addition to the positive examples, such as background images. This allows the classifier to deliver more reliable results after completing the training. The classifier preferably has a neural network. This refers to an artificial neural network, in particular an artificial neural folding network (CNN). Such a classifier has the required flexibility and is therefore very well suited to be expanded by training in a desired manner. … The computer network according to the invention is the remote station for training a classifier for a sensor arrangement. As already explained above, the computer network can be a single computer in the smallest configuration, and it is preferably a combination of computers or processors, such as in a high-performance server rack, a company network or a cloud. In the computer network, an adaptable classifier is implemented, which preferably corresponds to a classifier of the sensor arrangement. The customizable classifier is trained based on the sensor data of the sensor arrangement that are to be classified, and then classifier data is transmitted to the sensor array. The sensor array receives, via the classifier data, the necessary information to modify its classifier according to the result gained by the training. … The classifier is preferably a neural network. The remarks on a neural network of the sensor arrangement apply accordingly. 
The neural network is trained on the basis of the sensor data obtained from the sensor arrangement as positive examples, preferably enriched by negative examples that are also transmitted or stored in a separate database. Preferably, it is the same implementation of a neural network as on the sensor, because then it is sufficient to play back the trained parameters, such as weighting factors. However, it is also conceivable to transmit the neural network as a whole as a new, extended classifier to the sensor arrangement. The computer network has large, scalable computing capacities that are not affected by the limitations of a sensor. As a result, even a complex neural network can be trained in shorter times and in a higher quality than would be possible within the sensor arrangement. The sensor data preferably comprise image data, wherein the arithmetic unit is configured to generate further image data for training from the received image data, in which image aberrations are added, the texture of the background is changed and/or the recording perspective is altered mathematically. In general, but especially for the training of a neural network with many nodes or even layers ("Deep Learning"), many training records are required. The operator of the sensor arrangement typically wants to, and usually can, provide only a few positive examples. Therefore, it makes sense to reprocess and duplicate the image data (augmentation). In particular, image data are combined with different background textures or with various image errors, which in turn correspond to typical on-site situations such as optical defects, contamination or damage to the recorded objects or symbols, and the like. A mathematical change of the recording perspective likewise leads to many variants that are relevant in practice. For this purpose, the image, or the symbol to be learned therein, is reduced in size, enlarged and rotated in a planar or perspective manner. 
Augmentation is also useful for sensor data other than image data, in which case corresponding variations are used. … Overall, a completely automated training process in the computer network 12 thus results in a new or extended classifier, such as a trained convolutional network, with which a desired symbol is additionally recognizable in the sensor array 10. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the trained machine learning model is based on a trained neural network as suggested by Fraunhofer because, as a result, even a complex neural network can be trained in shorter times and in a higher quality than would be possible within the sensor arrangement. As to claim 28, Rodgers does not disclose wherein the neural network is a convolutional neural network. However, Fraunhofer discloses wherein the neural network is a convolutional neural network. See the citations above in claim 27, which disclose an artificial neural folding network (CNN) and also refer to a trained convolution network, terms which directly read on and are synonymous with a convolutional neural network. 
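As context for the convolution terminology discussed above, a minimal forward pass of a convolutional network can be sketched as follows. This is an illustrative sketch, not a disclosure of either reference; all names, shapes and the random parameters are assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: the core operation of a
    convolutional ("folding") network layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def tiny_cnn_forward(image, kernels, weights, bias):
    """One conv layer -> ReLU -> global average pooling -> linear scores."""
    features = np.array([conv2d(image, k).clip(min=0).mean() for k in kernels])
    return features @ weights + bias  # one score per class

rng = np.random.default_rng(1)
image = rng.random((12, 12))
kernels = rng.standard_normal((4, 3, 3))  # four (untrained) 3x3 filters
weights = rng.standard_normal((4, 2))     # map 4 features to 2 classes
bias = np.zeros(2)
scores = tiny_cnn_forward(image, kernels, weights, bias)
predicted_class = int(np.argmax(scores))
```

In a trained network the kernels and weights would be the parameters learned in the computer network and played back to the sensor; here they are random placeholders that only demonstrate the data flow.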
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the neural network is a convolutional neural network as suggested by Fraunhofer because, as a result, even a complex neural network can be trained in shorter times and in a higher quality than would be possible within the sensor arrangement. As to claim 29, Rodgers does not disclose wherein in the analysis routine, by applying the trained machine learning model, a feature extractor is applied directly or indirectly to the image of the respective one of the packages in order to generate a feature space, and wherein in a classification step of the analysis routine, the respective one of the packages is classified into the package class at least in part on the basis of the feature space. However, Fraunhofer makes obvious wherein in the analysis routine, by applying the trained machine learning model, a feature extractor is applied directly or indirectly to the image (“the selection of the sample images with the desired symbol”) of the respective one of the packages in order to generate a feature space (such as by cropping images), and wherein in a classification step of the analysis routine, the respective one of the packages is classified into the package class at least in part on the basis of the feature space. See the translation, explaining Figure 2, and disclosing: FIG. 2 shows an exemplary flow for training a new classification function in the sensor array 10 through the computer network 12. In this figure, steps that take place in the sensor arrangement 10 are shown on the left, those in the computer network 12 on the right, and the data communication between them in the middle. The flow shows an embodiment in which several steps are optional, in particular annotation, augmentation and validation. 
It is again pointed out that the explanation uses the example of an image-based sensor array 10 and a symbol to be taught, but the invention is not limited thereto. In addition, only one symbol to be taught is often spoken of, although the sensor array 10 usually distinguishes numerous symbols and can also learn several new symbols in one step. The process of teaching a new functionality may be triggered by the sensor arrangement 10. For this purpose, at least one example image is recorded with the symbol to be trained in a step S1, wherein, for a simpler and more robust training, it is preferably a plurality of exemplary images. FIG. 3 illustrates a selection of dangerous goods labels as examples of such symbols. For a fixed selection of such symbols, the sensor arrangement 10 may alternatively be set up from the outset with a classifier that recognizes and differentiates them. In the field, however, the requirement often arises afterwards to recognize another symbol, or a new symbol is created, or a symbol becomes more important, as happened in the past with lithium batteries. The user does not necessarily have to record the sample images in the course of expanding the symbol recognition, but can resort to images previously stored by the sensor arrangement. The sample images preferably have a high quality, i.e., they are as clean as possible, undistorted, without reflections, recorded from an orthogonal perspective and with sufficient resolution. If the symbols in practice have different appearances, colors and backgrounds, it is helpful to cover this representatively with the sample images. In a step S2, an annotation of the example images follows. This is a manual preparation for the subsequent training, which also identifies the features to be recognized. A minimal implicit annotation is already made here by the selection of the sample images with the desired symbol, and this may be sufficient in particularly easy-to-use embodiments. 
Preferably, as illustrated in FIG. 4 by a dashed line, the actual symbol which is to be recognized later is marked. The sample images are automatically cropped from the annotation in the same way. In addition, a threshold can be set automatically or with user assistance to separate the symbol into foreground and background colors. This separation facilitates the later data augmentation in step S4, to be discussed below. The subsequent training leads to better results, in particular a significantly reduced false positive rate, when background examples are available, i.e., images without the symbols to be learned. Preferably, therefore, such negative examples are also recorded and annotated in the sensor arrangement 10. Alternatively, image data already stored in a database of the computer network 12 is used when the sensor arrangement … In a step S3, the example images with the symbols and possible background images are transferred to the computer network 12 as the basis for training. Accompanying the sample images is the annotation, a label that denotes the symbol. This can be a free designation, or any designation generated by the system, as long as it is recognizable and makes the new symbol distinguishable. An intelligent name that describes the actual symbol has the advantage that example images of other sensor arrangements may already be located in the computer network 12, which further improve the training. The computer network 12 can also send back a query with similar terms and example images, to ask whether the new symbol to be learned matches a symbol already known in the computer network 12. To make such suggestions, the computer network 12 can also apply a separate classifier to the sample images. 
Further data that can be stored with the example images are the originating system, i.e., an identification of the sensor arrangement 10, and timestamps for recording the sample images or transferring them to the computer network 12. The transmission can be done manually, for example via a web interface, and can concern currently recorded data as well as previously collected offline data. In addition, a direct transmission by the sensor array 10 is conceivable. In a step S4 of the augmentation, the training data is expanded. The sample images are received and stored in a database in the computer network 12, where some cropped images of the symbol to be learned as positive examples, and preferably negative examples in the form of arbitrary recordings of other objects without symbols, or at least without the symbol to be learned, are now collected. However, the number of examples from real data, especially the positive examples, will regularly be very small, since their extraction is cumbersome and time-consuming. In addition, even damaged, faded or dirty symbols should be recognized. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein in the analysis routine, by applying the trained machine learning model, a feature extractor is applied directly or indirectly to the image of the respective one of the packages in order to generate a feature space, and wherein in a classification step of the analysis routine, the respective one of the packages is classified into the package class at least in part on the basis of the feature space by using the neural network training process of Fraunhofer with the packages of the Rodgers type in order to achieve the benefits of “training [that] leads to better results, in particular a significantly reduced false positive rate”. 
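The two-stage structure recited in claim 29, a feature extractor that maps the image into a feature space followed by a classification step over that space, can be sketched as follows. The feature statistics and the nearest-prototype rule here are hypothetical stand-ins for a trained model, not taken from Rodgers or Fraunhofer.

```python
import numpy as np

def extract_features(image):
    """Hypothetical feature extractor: map an image into a small
    feature space (mean intensity, spread, bright-pixel fraction)."""
    return np.array([image.mean(), image.std(), (image > 128).mean()])

def classify(features, prototypes):
    """Classification step: assign the package class whose prototype
    vector is nearest in the feature space (nearest-prototype rule)."""
    dists = [np.linalg.norm(features - p) for p in prototypes]
    return int(np.argmin(dists))

# two toy package classes: dark packages (class 0) and bright ones (class 1)
dark = np.full((8, 8), 30.0)
bright = np.full((8, 8), 200.0)
prototypes = [extract_features(dark), extract_features(bright)]
```

In the claimed arrangement the extractor would itself be part of the trained model; the hand-written statistics above only illustrate how a feature space separates the classification step from the raw image.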
As to claim 30, Rodgers does not disclose wherein the classification step of the analysis routine is performed by applying the trained machine learning model. However, Fraunhofer makes obvious wherein the classification step of the analysis routine is performed by applying the trained machine learning model. See the citations in claim 29 above. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the classification step of the analysis routine is performed by applying the trained machine learning model by using the neural network training process of Fraunhofer with the packages of the Rodgers type in order to achieve the benefits of “training [that] leads to better results, in particular a significantly reduced false positive rate”. As to claim 31, Rodgers does not disclose wherein, in the analysis routine, in a proposal step, proposed regions which potentially contain subsections of the respective one of the packages are identified in the image, and wherein in a classification step the proposed regions are analyzed for the classification. However, Fraunhofer makes obvious wherein, in the analysis routine, in a proposal step, proposed regions which potentially contain subsections of the respective one of the packages are identified in the image (for example, by cropping in some way as cited in the citations in claim 29 above, or by artificial augmentation as cited below), and wherein in a classification step the proposed regions are analyzed for the classification. See the citations in claim 29 above. See also later in the translation, which further recites: Therefore, in order to obtain a classifier which detects various types of alienated symbols, artificial augmentation of the training data set, which preferably covers a very wide range of alienations, occurs, whereby only a selection of the possibilities described below can be implemented. 
FIG. 5 illustrates the augmentation using the example of some of the original, unaltered symbols (left) and some alienations (augmentations) obtained in the course of the augmentation. By modeling a camera environment, the symbols are transformed into different perspectives. They are rotated in the plane, but also in perspective, and scaled to different sizes according to a recording distance. By means of a texture database, various backgrounds can be added, such as cardboard, stickers and the like. The image is also randomly overlaid with various foreground textures that simulate alienation, especially in the logistics sector, such as soiling, damage to a package, markings or lighting variations. Thus, from the original symbols of the example images, a large, diverse set of training data is created in a very short time, which maps a desired variation of the symbols to be recognized and enables a successful training of a robust classifier. The augmented training data are preferably stored in a database, with a reference to the sensor array 10 and the original sample images, for later reuse even after the training. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein, in the analysis routine, in a proposal step, proposed regions which potentially contain subsections of the respective one of the packages are identified in the image, and wherein in a classification step the proposed regions are analyzed for the classification by using the neural network training process of Fraunhofer with the packages of the Rodgers type in order to achieve the benefits of “training [that] leads to better results, in particular a significantly reduced false positive rate”. As to claim 32, Rodgers does not disclose wherein the proposal step is performed by applying the trained machine learning model. 
However, Fraunhofer makes obvious wherein the proposal step is performed by applying the trained machine learning model (for example, by cropping in some way as cited in the citations in claim 29 above, or by artificial augmentation as cited below). See the citations in claim 29 above. See also later in the translation, which further recites: Therefore, in order to obtain a classifier which detects various types of alienated symbols, artificial augmentation of the training data set, which preferably covers a very wide range of alienations, occurs, whereby only a selection of the possibilities described below can be implemented. FIG. 5 illustrates the augmentation using the example of some of the original, unaltered symbols (left) and some alienations (augmentations) obtained in the course of the augmentation. By modeling a camera environment, the symbols are transformed into different perspectives. They are rotated in the plane, but also in perspective, and scaled to different sizes according to a recording distance. By means of a texture database, various backgrounds can be added, such as cardboard, stickers and the like. The image is also randomly overlaid with various foreground textures that simulate alienation, especially in the logistics sector, such as soiling, damage to a package, markings or lighting variations. Thus, from the original symbols of the example images, a large, diverse set of training data is created in a very short time, which maps a desired variation of the symbols to be recognized and enables a successful training of a robust classifier. The augmented training data are preferably stored in a database, with a reference to the sensor array 10 and the original sample images, for later reuse even after the training. 
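The proposal-then-classification structure at issue in claims 31 and 32 can be sketched as follows. The sliding-window variance heuristic and the brightness rule are hypothetical stand-ins for a trained proposal model and classifier, not disclosures of either reference.

```python
import numpy as np

def propose_regions(image, win=8, stride=8, thresh=20.0):
    """Proposal step: slide a window over the image and keep regions
    whose intensity variance suggests they may contain something of
    interest (a stand-in for a learned proposal model)."""
    regions = []
    h, w = image.shape
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if image[y:y+win, x:x+win].std() > thresh:
                regions.append((y, x, win, win))
    return regions

def classify_region(image, region):
    """Classification step on one proposed region (stand-in rule:
    class 1 if the region is predominantly bright)."""
    y, x, h, w = region
    return int(image[y:y+h, x:x+w].mean() > 128)

# toy image: dark field with one bright square straddling the windows
image = np.zeros((16, 16))
image[2:14, 2:14] = 255.0
regions = propose_regions(image)
labels = [classify_region(image, r) for r in regions]
```

Restricting the classifier to proposed regions is what lets the second stage analyze only subsections of the image, as the claim language describes.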
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the proposal step is performed by applying the trained machine learning model by using the neural network training process of Fraunhofer with the packages of the Rodgers type in order to achieve the benefits of “training [that] leads to better results, in particular a significantly reduced false positive rate”. As to claim 33, Rodgers does not disclose wherein, in a learning routine, the machine learning model is trained on a training data set by means of the control arrangement. However, Fraunhofer makes obvious wherein, in a learning routine, the machine learning model is trained on a training data set by means of the control arrangement. See the citations in claim 29 above. See also the translation, “In the simplest case, the user merely designates data sets of sensor data which are to be classified in the future.” Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein, in a learning routine, the machine learning model is trained on a training data set by means of the control arrangement by using the neural network training process of Fraunhofer with the packages of the Rodgers type in order to achieve the benefits of “training [that] leads to better results, in particular a significantly reduced false positive rate”. As to claim 34, Rodgers does not disclose wherein the training data set is derived at least partially from images recorded by means of the sensor arrangement in a previous and/or ongoing labeling routine in which a classification of the respective packages is predetermined. 
However, Fraunhofer makes obvious wherein the training data set is derived at least partially from images recorded by means of the sensor arrangement in a previous and/or ongoing labeling routine in which a classification of the respective packages is predetermined. See the citations in claim 29 above. See also the translation, “In the simplest case, the user merely designates data sets of sensor data which are to be classified in the future.” The translation also discloses “For this purpose, the sensor arrangement transmits, in an expansion function, selected sensor data, which is additionally to be processed by the classifier, to the computer network. There, a classifier is trained based on the sensor data, and upon completion of the training, the sensor array receives classifier data to modify the classifier, such as parameters, program sections, or even the entire trained classifier. Thus, the sensor arrangement is also equipped for the classification of the selected sensor data.” Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the training data set is derived at least partially from images recorded by means of the sensor arrangement in a previous and/or ongoing labeling routine in which a classification of the respective packages is predetermined by using the neural network training process of Fraunhofer with the packages of the Rodgers type in order to achieve the benefits of “training [that] leads to better results, in particular a significantly reduced false positive rate”. As to claim 35, Rodgers does not disclose wherein, in the labeling routine employed for the training data set, packages of the same package class are labeled at least during certain periods of time. However, Fraunhofer makes obvious wherein, in the labeling routine employed for the training data set, packages of the same package class are labeled at least during certain periods of time. 
See the citations in claim 29 above. See also the translation, “In the simplest case, the user merely designates data sets of sensor data which are to be classified in the future.” The translation also discloses “For this purpose, the sensor arrangement transmits, in an expansion function, selected sensor data, which is additionally to be processed by the classifier, to the computer network. There, a classifier is trained based on the sensor data, and upon completion of the training, the sensor array receives classifier data to modify the classifier, such as parameters, program sections, or even the entire trained classifier. Thus, the sensor arrangement is also equipped for the classification of the selected sensor data.” Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein, in the labeling routine employed for the training data set, packages of the same package class are labeled at least during certain periods of time by using the neural network training process of Fraunhofer with the packages of the Rodgers type in order to achieve the benefits of “training [that] leads to better results, in particular a significantly reduced false positive rate”. As to claim 39, Rodgers does not disclose wherein the control arrangement is configured as part of the labeling apparatus and/or in a cloud-based manner. However, Fraunhofer discloses wherein the control arrangement is configured as part of the labeling apparatus and/or in a cloud-based manner. 
See the Fraunhofer translation, disclosing “Some examples of the computer network are a corporate network, be it the operator of the sensor array or the manufacturer, system integrator or the like, especially in the form of a private or public cloud.” See also later in the Fraunhofer translation, disclosing “As already explained above, the computer network can be a single computer in the smallest configuration, and it is preferably a combination of computers or processors, such as in a high-performance server rack, a company network or a cloud.” Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the control arrangement is configured as part of the labeling apparatus and/or in a cloud-based manner as disclosed by Fraunhofer in order to achieve cloud-based high-performance computing power and cost savings. As to claim 40, Rodgers discloses a labeling system having at least one labeling apparatus for individually labeling packages, the labeling apparatus comprising: as functional units a feed arrangement (such as conveyor 150, 250 or a pair of conveyors 350, 352), a label dispensing arrangement (“may include various label supply facilities such as rolls, trays or carriages which may provide blank stock to the label printer 276 for printing”, “one or more components for transferring a printed label to the rotatable tamp head”), a label affixing arrangement (tamp assembly 170, 270, 370 and tamp head 172, 272, 372), and a printer arrangement (“label printer 276 may be any form of analog or digital printer, e.g., a toner-based, inkjet, solid ink, inkless, dot-matrix or daisy wheel printer”); and a control arrangement (“controller 274”) of the labeling system configured to drive the functional units in a labeling routine; wherein the feed arrangement is configured to transport the packages in the labeling routine (“the container 10A travels along the conveyor 150.”), wherein the 
label dispensing arrangement is configured to dispense labels in the labeling routine (see step 740, “At box 740, the label may be transferred to a tamp head.”), wherein the label affixing arrangement is configured to affix an individual one of the labels onto a respective one of the packages (see step 790, “the process advances to box 790, where the printed label is applied to the container using the tamp head”), wherein the printer arrangement is configured to print the individual one of the labels (see step 730, “at box 730, an appropriate label for the container is printed”), wherein the labeling apparatus has a sensor arrangement, which is configured to record images of the packages (“an imaging device 366 (e.g., a digital camera) for capturing imaging information in the form of digital images and/or range data from objects passing along the conveyors”), wherein the control arrangement is configured to analyze the images of the packages in an analysis routine (“For example, as the container 30 passes beneath the depth sensor 362, dimensional information such as lengths, widths or heights of the container 30 may be determined, and based at least in part on such information, areas of one or more surfaces of the container 30, as well as a volume of the container 30, may be estimated.”), to derive a classification of the respective one of the packages into a package class by means of the analysis routine and to carry out the driving of the labeling apparatus in the labeling routine as a function of the classification (“Based at least in part on the dimensional information, the mass and/or the imaging information obtained using the depth sensor 362, the scale 364 and the imaging device 366, the container 30 may be identified, and an orientation or alignment of the container 30 may be determined.”). See column 2, lines 29-50, disclosing: The systems and methods disclosed herein may be implemented in order to apply a label, e.g.
a shipping label, to an object, e.g., a container including one or more items therein, in a manner that increases the likelihood that the label will remain readable and intact during a delivery of the object to a destination. Some advantages of the present disclosure are shown with regard to FIGS. 1A, 1B, 1C and 1D. Referring to FIG. 1A, a system 100 including a conveyor 150, a laser sensor 160 and a tamp assembly 170 is shown. The conveyor 150 includes a container 10A traveling thereon. The container 10A includes a seam 12A and an adhesive tape 14A that seals the seam 12A, and the seam 12A and the tape 14A are aligned substantially coaxially with a direction of travel of the container 10A on the conveyor 150. The laser sensor 160 is configured to determine one or more attributes of the container 10A, e.g., a dimension such as a height, a width or a length of the container 10A, the seam 12A or the tape 14A, or an orientation or alignment of the container 10A, the seam 12A or the tape 14A, as the container 10A travels along the conveyor 150. The tamp assembly 170 includes a tamp head 172 for applying one or more labels to containers passing along the conveyor 150. See also column 9, line 25 to column 10, line 25: As is shown in FIG. 2, the fulfillment center 240 also includes the tamp assembly 270, which further includes a rotatable tamp head 272, a controller 274, a label printer 276 and a proximity sensor 278, which may operate independently or in conjunction with one another, at the instruction of the computer 242 or one or more other computers or computer devices. The rotatable tamp head 272 may be any component having a suitable surface for receiving a label to be affixed upon an object, contacting the object and disposing the label upon the object. 
Additionally, the controller 274 may be any form of actuator or motorized component for raising or lowering, or otherwise repositioning, e.g., rotating, an application surface of the rotatable tamp head 272 within a plane or about one or more axes. The controller 274 may cause the rotatable tamp head 272 to be raised, lowered or rotated using any form of prime mover, and may include one or more actuators which cause the rotatable tamp head 272 to press or roll a label into a surface of an object (e.g., a flat surface and/or cylinder), wrap a label around a corner or edge of an object, or “blow” or otherwise apply a label onto an object by way of fluid pressure in accordance with the present disclosure. Moreover, the rotatable tamp head 272 may further include any device or component for holding a label onto the rotatable tamp head 272 prior to applying the label to an object, including one or more vacuum or adhesive systems. The rotatable tamp head 272 may take any shape or any size. For example, the rotatable tamp head 272 may have an application surface having a size consistent with a size of a label to be applied thereby, i.e., an area of approximately four inches by six inches (4″×6″) where the tamp assembly 270 is to be used to apply a four inch by six inch (4″×6″) label, or an object to which the label is to be applied. The rotatable tamp head 272 may also be chamfered, shaped or otherwise modified to correspond to any system requirements, however. Additionally, one or more rotatable tamp heads 272 may be releasably mounted within the tamp assembly 270 and interchangeably provided for use with labels or objects of various sizes, or in connection with various applications. The label printer 276 may be any form of analog or digital printer, e.g., a toner-based, inkjet, solid ink, inkless, dot-matrix or daisy wheel printer, configured to impose one or more characters, symbols or other markings onto a label or other adhesive paper-type product. 
The label printer 276 may be configured to operate separately or in conjunction with the rotatable tamp head 272, and may include various label supply facilities such as rolls, trays or carriages which may provide blank stock to the label printer 276 for printing. The label printer 276 may be further connected to or otherwise associated with one or more computer devices, such as a direct or networked connection with the computer 242 or a networked connection with any other computer device outside the fulfillment center 240, and may receive one or more commands to print instructions or data onto a predetermined label of a selected type or form. Moreover, the label printer 276 may be used to print labels for various applications. For example, a label printer 276 may be configured to print shipping labels onto one or more containers of an outbound shipment that is being prepared for delivery from the fulfillment center 240 to a customer 230, while also printing labels onto one or more containers of an inbound shipment arriving at the fulfillment center 240 from the vendor 220. Additionally, the label printer 276 may further include one or more components for transferring a printed label to the rotatable tamp head 272 for application onto one or more objects. See column 12, line 61, disclosing: The system 300 shown in FIG. 3 includes a pair of conveyors 350, 352, a plurality of sensors 360 and a tamp assembly 370 provided for applying or affixing one or more labels to a container 30. 
The plurality of sensors 360 includes a depth sensor or range camera 362 (e.g., a laser sensor) oriented to capture dimensional information or any other attributes regarding the container 30, the seam 32 or any markings or other identifiers on the container 30 or the seam 32, as the container 30 passes thereunder, a scale 364 for determining a mass of the container 30 and an imaging device 366 (e.g., a digital camera) for capturing imaging information in the form of digital images and/or range data from objects passing along the conveyors 350, 352, such as the container 30. The tamp assembly of FIG. 3 includes a rotatable tamp head 372, a label printer 376 and a proximity sensor 378. See also Figure 3 of Rodgers. Rodgers does not disclose wherein a trained machine learning model is stored in the control arrangement, wherein the control arrangement is configured to carry out the analysis routine based on an application of a trained machine learning model to the images. However, Fraunhofer discloses and makes obvious wherein a trained machine learning model is stored in the control arrangement, wherein the control arrangement is configured to carry out the analysis routine based on an application of a trained machine learning model to the images. See the translation, disclosing: The sensor element preferably has an image sensor for acquiring image data as sensor data. For this purpose, in particular at least one camera is provided. Image data provides a very wide variety of possible features that may become relevant to a classifier. Therefore, the flexibility and versatility by training new functionality is particularly in demand. … The control and evaluation unit is preferably designed to transmit sensor data to the computer network which does not correspond to the sensor data to be additionally classified. Such sensor data represent negative examples in addition to the positive examples, such as background images.
This allows the classifier to deliver more reliable results after completing the training. The classifier preferably has a neural network. This refers to an artificial neural network, in particular an artificial convolutional neural network (CNN). Such a classifier has the required flexibility and is therefore very well suited to be expanded by training in a desired manner. … The computer network according to the invention is the remote station for training a classifier for a sensor arrangement. As already explained above, the computer network can be a single computer in the smallest configuration, and it is preferably a combination of computers or processors, such as in a high-performance server rack, a company network or a cloud. In the computer network, an adaptable classifier is implemented, which preferably corresponds to a classifier of the sensor arrangement. The customizable classifier is trained based on sensor data of the sensor array to be classified, and then classifier data is transmitted to the sensor array. The sensor array receives the necessary information via the classifier data to modify its classifier according to the training gained by training. … The classifier is preferably a neural network. The remarks on a neural network of the sensor arrangement apply accordingly. The neural network is trained on the basis of the sensor data obtained from the sensor arrangement as positive examples, preferably enriched by negative examples also transmitted or stored in a separate database. Preferably, it is an implementation of a neural network as on the sensor, because then it is sufficient to play back the trained parameters such as weighting factors. However, it is also conceivable to transmit the neural network as a whole as a new, extended classifier to the sensor arrangement. The computer network has large, scalable computing capacities that are not affected by the limitations of a sensor.
As a result, even a complex neural network can be trained in shorter times and in a higher quality than would be possible within the sensor arrangement. The sensor data preferably have image data, wherein the arithmetic unit is configured to generate further image data for training from received image data, in which image aberrations are added, the texture of the background is changed and/or the recording perspective is altered mathematically. In general, but especially for the training of a neural network with many nodes or even layers ("Deep Learning"), many training records are required. The operator of the sensor arrangement can usually provide only a few positive examples. Therefore, it makes sense to reprocess and duplicate the image data (augmentation). In particular, image data with different background textures are combined with various image errors, which in turn correspond to typical on-site situations such as optical defects, contamination or damage to the recorded objects or symbols and the like. Also, a mathematical change of the recording perspective leads to many variants which are relevant for practice. For this purpose, the image or the symbol to be learned therein is reduced in size, enlarged and rotated in a planar or perspective manner. An augmentation is also useful in the case of sensor data other than image data, in which case corresponding variations are used. … Overall, a completely automated training process in the computer network 12 thus yields a new or extended classifier, such as a trained convolutional network, with which a desired symbol is additionally recognizable in the sensor array 10.
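The augmentation steps quoted above (adding image aberrations, changing the background texture, and altering the recording perspective by shrinking, enlarging and rotating) can be sketched in a few lines. This is a hypothetical illustration, not the reference's implementation: the function name and parameters are invented, and rotation by multiples of 90 degrees stands in for the arbitrary perspective warps the reference contemplates.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(image: np.ndarray, n_variants: int = 4) -> list[np.ndarray]:
    """Generate training variants from one positive example, loosely
    following the augmentation steps described in the translation."""
    variants = []
    for _ in range(n_variants):
        img = image.astype(float).copy()
        # 1. "image aberrations are added": additive sensor noise
        img += rng.normal(0.0, 0.05, img.shape)
        # 2. "the texture of the background is changed": overwrite
        #    near-zero background pixels with a random texture
        background = image < 0.1
        img[background] = rng.uniform(0.0, 0.3, background.sum())
        # 3. "the recording perspective is altered": here only rotation
        #    by a random multiple of 90 degrees, for simplicity
        img = np.rot90(img, k=rng.integers(0, 4))
        variants.append(np.clip(img, 0.0, 1.0))
    return variants

symbol = np.zeros((16, 16))
symbol[4:12, 7:9] = 1.0            # a crude vertical bar as the "symbol"
training_set = augment(symbol, n_variants=8)
print(len(training_set), training_set[0].shape)
```

Each call turns one labeled example into several plausible on-site variants, which is the point of the passage: a few positive examples from the operator are "reprocessed and duplicated" into a training set large enough for a deep network.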
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein a trained machine learning model is stored in the control arrangement, wherein the control arrangement is configured to carry out the analysis routine based on an application of a trained machine learning model to the images as suggested by Fraunhofer because as a result, even a complex neural network can be trained in shorter times and in a higher quality than would be possible within the sensor arrangement. Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rodgers (US 9802728 B1) in view of Fraunhofer (DE 102017107837 A1, cited in IDS filed 11/7/2023) as applied to claims 17-21, 23-25, 27-35 and 39-40 above, and further in view of Wolff (WO 2019120646 A1). As to claim 22, Rodgers does not disclose wherein the label dispensing arrangement is equipped with a plurality of material strips for dispensing various label types, and wherein the individual one of the labels is dispensed for the respective one of the packages by means of the label dispensing arrangement according to the label type assigned to the package class. However, Wolff discloses wherein the label dispensing arrangement is equipped with a plurality of material strips for dispensing various label types, and wherein the individual one of the labels is dispensed for the respective one of the packages by means of the label dispensing arrangement according to the label type assigned to the package class. See the translation, disclosing “The label feeders 3, which can be activated independently of one another and can also provide different labels here, are therefore assigned to the same transport device 4, the same printing device 11 and the same recording device 12 in the exemplary embodiment described here.”. 
See also especially claim 10, which recites that “the control device (8) automatically adjusts the printing process as soon as a label (1) passes the detection area (14), which has a different width, length, position and / or orientation than a predetermined setpoint for the width, length, position and / or orientation or a different width, length, position and / or orientation than the immediately preceding label (1), preferably that the adaptation of the printing process within the period of detection the contour takes place until the beginning of the assigned printing process”. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the label dispensing arrangement is equipped with a plurality of material strips for dispensing various label types, and wherein the individual one of the labels is dispensed for the respective one of the packages by means of the label dispensing arrangement according to the label type assigned to the package class as taught by Wolff in order to achieve adaptation of the printing process within the period of detecting the contour, before the beginning of the assigned printing process. Claim(s) 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rodgers (US 9802728 B1) in view of Fraunhofer (DE 102017107837 A1, cited in IDS filed 11/7/2023) as applied to claims 17-21, 23-25, 27-35 and 39-40 above, and further in view of Simpson (US 20210280091 A1). As to claim 26, Rodgers and Fraunhofer do not disclose wherein a sorting arrangement is furthermore provided as a functional unit, and wherein the sorting arrangement sorts the packages on the feed arrangement as a function of the classification. However, Simpson discloses and makes obvious wherein a sorting arrangement is furthermore provided as a functional unit, and wherein the sorting arrangement sorts the packages on the feed arrangement as a function of the classification.
Simpson discloses in the abstract “a method and a system for building machine learning or deep learning data sets for automatically recognizing labels on items”. Paragraph 0048 teaches that “the described technology may have application in a variety of manufacturing, assembly, distribution, or sorting applications which include processing images including personal or sensitive information at high rates of speed and volume.” Paragraph 0050 discloses that “items, such as articles of mail (e.g., letters, flats, parcels, and the like), warehouse inventories, or packages are frequently received into a processing facility in bulk, and must be sorted into particular groups to facilitate further processes such as, for example, delivery of the item to a specified destination. Sorting items or articles can be done using imaging technologies.” Paragraph 0051 teaches that “A processing facility can use automated processing equipment to sort items. An item processing facility may receive a very high volume of items, such as letters, flats, parcels, or other objects which must be sorted and/or sequenced for delivery. Sorting and/or sequencing may be accomplished using item processing equipment which can scan, read, or otherwise interpret a destination end point from each item processed.” Paragraph 0051 also teaches that “In some embodiments, the processing facility uses sorting/sequencing apparatuses which can process over about 30,000 items per hour.” Paragraph 0052 teaches that “By imaging the items and using machine learning to interpret the images reduces or eliminates the need for manual sorting, manual identification of items, etc. 
The machine learning can identify labels on items and provide alerts or cause sorting or processing equipment to process the item in accordance with directions based on the identified label.” Paragraphs 0053 onward disclose the various labels and images which are trained or taught, teaching that “The DOT hazardous labels may be designed in compliance with nine hazardous classes including hazardous classes 1-9 covering explosive, flammable, oxidizer, poison, radioactive, corrosive, other dangerous materials or products, etc.”. For example, paragraph 0054 teaches training for DOT class 2 hazardous labels. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein a sorting arrangement is furthermore provided as a functional unit, and wherein the sorting arrangement sorts the packages on the feed arrangement as a function of the classification as suggested by Simpson in order to reduce or eliminate the need for manual sorting and manual identification of items, and to achieve high processing throughput and safety by recognizing dangerous packages. Claim(s) 36-38 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rodgers (US 9802728 B1) in view of Fraunhofer (DE 102017107837 A1, cited in IDS filed 11/7/2023) as applied to claims 17-21, 23-25, 27-35 and 39-40 above, and further in view of Ehrmann (US 20150158617 A1). As to claim 36, Rodgers does not disclose wherein an aligning arrangement is provided as a functional unit, and wherein by means of the aligning arrangement the packages are individually positioned on the feed arrangement. However, Ehrmann discloses wherein an aligning arrangement (guide rails 3) is provided as a functional unit, and wherein by means of the aligning arrangement the packages are individually positioned on the feed arrangement. See especially paragraph 0024, disclosing: [0024] FIG.
1 shows a line converger 1 according to one embodiment of the present invention with a label dispenser 2 which may be arranged at the end of the line converger 1 when seen in a conveying direction T. The line converger 1 may also include two movable guide rails 3 that may be converging towards one another and that can be adjusted such that a gap or opening for packages 4 is defined at the downstream end of the guide rails 3. The width of the gap or opening may be only slightly larger than the width of the package 4. The gap can be formed at an arbitrary position transverse to the conveying direction T. In FIG. 1, the gap is shown in the central area transverse to the conveying direction above and on a conveyor belt 5. The guide rails 3 can be provided with additional guides 6 on a downstream end thereof to align the package 4 along the conveying direction T. A label 7 can then be positioned precisely on the package 4 based on its transverse position with respect to the conveying direction T. See also Figure 2, which shows an overhead view of the guide rails 3. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein an aligning arrangement is provided as a functional unit, and wherein by means of the aligning arrangement the packages are individually positioned on the feed arrangement as taught by Ehrmann in order to guide the packages to the labelling operation. As to claim 37, Rodgers does not disclose wherein the aligning arrangement has at least one guide element which is adjacent to and/or protrudes into a conveyor region of the feed arrangement at least in certain sections, and wherein the packages are individually positioned on the feed arrangement at least in part by the at least one guide element.
However, Ehrmann as cited above in claim 36 also discloses and makes obvious wherein the aligning arrangement has at least one guide element which is adjacent to and/or protrudes (see especially marked up Figure 2) into a conveyor region of the feed arrangement at least in certain sections, and wherein the packages are individually positioned on the feed arrangement at least in part by the at least one guide element. Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the aligning arrangement has at least one guide element which is adjacent to and/or protrudes into a conveyor region of the feed arrangement at least in certain sections, and wherein the packages are individually positioned on the feed arrangement at least in part by the at least one guide element as taught by Ehrmann in order to guide the packages to the labelling operation. As to claim 38, Rodgers does not disclose wherein the sensor arrangement is provided at the aligning arrangement and/or downstream of the aligning arrangement on the feed arrangement, wherein the sensor arrangement has a predefined distance from the packages, and wherein the predefined distance corresponds to the distance of the sensor arrangement from the packages in the labeling routine employed for the training data set. However, Ehrmann as cited above in claim 36 also discloses and makes obvious wherein the sensor arrangement is provided at the aligning arrangement and/or downstream of the aligning arrangement on the feed arrangement (in Figure 2, sensor 8 is downstream, as shown by arrow T), wherein the sensor arrangement has a predefined distance from the packages. Additionally, Fraunhofer would make obvious wherein the predefined distance corresponds to the distance of the sensor arrangement from the packages in the labeling routine employed for the training data set. 
Therefore, it would have been obvious to one of ordinary skill in the art at the time of the filing of the invention to have utilized wherein the sensor arrangement is provided at the aligning arrangement and/or downstream of the aligning arrangement on the feed arrangement, wherein the sensor arrangement has a predefined distance from the packages, and wherein the predefined distance corresponds to the distance of the sensor arrangement from the packages in the labeling routine employed for the training data set as taught by Ehrmann and Fraunhofer in order to guide the packages to the labelling operation. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to GEORGE R KOCH whose telephone number is (571) 272-5807. The examiner can also be reached by E-mail at george.koch@uspto.gov if the applicant grants written authorization for e-mails. Authorization can be granted by filling out the USPTO Automated Interview Request (AIR) Form. The examiner can normally be reached M-F 10-6:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PHILIP C TUCKER can be reached at (571)272-1095. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /GEORGE R KOCH/Primary Examiner, Art Unit 1745 GRK

Prosecution Timeline

Nov 07, 2023
Application Filed
Jan 08, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12577065
CONVEYING APPARATUS AND PEELING APPARATUS
2y 5m to grant Granted Mar 17, 2026
Patent 12577016
POLE PIECE LABELING CONTROL METHOD AND DEVICE, ELECTRONIC EQUIPMENT, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 17, 2026
Patent 12568788
SEMICONDUCTOR PACKAGE MANUFACTURING APPARATUS AND SEMICONDUCTOR PACKAGE MANUFACTURING METHOD USING THE SAME
2y 5m to grant Granted Mar 03, 2026
Patent 12568587
CONNECTION METHOD FOR YARN WIRE AND CIRCUIT BOARD
2y 5m to grant Granted Mar 03, 2026
Patent 12545456
Splice mechanism for a packaging assembly
2y 5m to grant Granted Feb 10, 2026
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
90%
With Interview (+17.6%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 1075 resolved cases by this examiner. Grant probability derived from career allow rate.
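The with-interview figure above appears to combine the 73% career allow rate with the 17.6-point interview lift. Assuming the lift is simply additive in percentage points (the page does not state its exact derivation or rounding), the projection can be reproduced as:

```python
def with_interview(base_rate_pct: float, lift_pct_points: float) -> float:
    """Combine a base grant rate with an interview lift, under the
    assumption that the lift is additive in percentage points,
    capped at 100%."""
    return min(base_rate_pct + lift_pct_points, 100.0)

# 73% career allow rate + 17.6-point lift gives 90.6%,
# consistent with the roughly 90% shown above.
print(with_interview(73.0, 17.6))
```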
