Prosecution Insights
Last updated: April 19, 2026
Application No. 18/285,907

DEVICE AND METHOD FOR CHECKING A MARKING OF A PRODUCT

Non-Final OA: §102, §103
Filed: Oct 06, 2023
Examiner: GARCIA, PAULO ANDRES
Art Unit: 2669
Tech Center: 2600 — Communications
Assignee: Rea Elektronik GmbH
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 99%

Examiner Intelligence

Grants 83% — above average: Career Allow Rate 83% (34 granted / 41 resolved; +20.9% vs TC avg)
Strong +17% interview lift: +17.2% higher allow rate on resolved cases with an interview than without
Typical timeline: Avg Prosecution 3y 2m; 13 currently pending
Career history: 54 Total Applications across all art units
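
The allow-rate and interview-lift figures above are simple ratios over this examiner's 41 resolved cases. Below is a minimal sketch of how such figures are presumably derived; the ResolvedCase fields and the exact lift definition are illustrative assumptions, not the tool's documented methodology.

```python
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool         # resolved by grant (vs. abandonment)
    had_interview: bool   # at least one examiner interview of record

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate gap between interviewed and non-interviewed resolved cases."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 34 grants out of 41 resolved cases -> 0.829, displayed as 83%.
```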

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 54.3% (+14.3% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 41 resolved cases

Office Action

§102, §103
DETAILED ACTION Notice of Pre-AIA or AIA Status 1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Notice to Applicants 2. This communication is in response to the application filled on 10/06/2023. 3. Claims 16-30 are pending. 4. Limitations appearing inside {} are intended to indicate the limitations not taught by said prior art(s)/combinations. Information Disclosure Statement 5. The information disclosure statement(s) (IDS) submitted on 10/06/2023 have been considered by the examiner. Specification 6. The abstract of the disclosure is objected to because legal phraseology and exceeds 150 words. Specifically, ln. 10 recites “…determined by means of the comparison”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b). 7. Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided. Claim Rejections - 35 USC § 102 8. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. 9. Claims 16, 18-20, and 25-30 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Publication No. 2021/0157998 to Rodriguez et al. (hereinafter Rodriguez). 10. Regarding Claim 16, Rodriguez discloses a method for checking a marking of a product, comprising ([par. 0100, ln. 1-15] “…the present technology concerns a method for identifying items, e.g., by a supermarket checkout system… involves moving an item to be purchased along a path, such as by a conveyor. A first camera arrangement captures first 2D image data depicting the item when the item is at a first position along the path. Second 2D image data is captured… at a second position along the path. A programmed computer, or other device, processes the captured image data—in conjunction with geometrical information about the path and the camera—to discern 3D spatial orientation information for a first patch on the item. By reference to this 3D spatial orientation information, the system determines object-identifying information from the camera's depiction of at least the first patch.”, [par. 0102, ln. 
1-9] “The object-identifying information can be a machine-readable identifier, such as a barcode or a steganographic digital watermark, either of which can convey a plural-bit payload. This information can… comprise text—recognized by an optical character recognition engine… the product can be identified by other markings, such as by image fingerprint information that is matched to reference fingerprint information in a product database.”): acquiring, in an image acquisition step (3), at least one image (4) of the marking arranged on a surface of the product by a calibrated test camera ([Fig. 1A-B], [Fig. 27-29], [par. 0100, ln. 1-15], [par. 0102, ln. 1-9], [par. 0198, ln. 1-5] “Another approach simply characterizes the perspective distortion of the camera across its field of view, in a calibration operation—before use. This information is stored, and later recalled to correct imagery captured during use of the system.”, [par. 0199, ln. 1-11] “One calibration technique places a known reference pattern (e.g., a substrate marked with a one-inch grid pattern) on the conveyor. This scene is photographed by the camera, and the resulting image is analyzed to discern the perspective distortion at each 2D location across the camera's field of view (e.g., for each pixel in the camera's sensor). The operation can be repeated, with the calibrated reference pattern positioned at successively elevated heights above the plane of the conveyor (e.g., at increments of one inch). Again, the resulting imagery is analyzed, and the results stored for later use.”); checking, in a layout checking step (1), using the at least one image (4), a quality of the marking applied to the surface with respect to static data including at least one of position and machine-readability ([par. 0293, ln. 1-18] “Application of the above procedure to the 3D arrangement of FIG. 31 results in a segmented 3D model, such as is represented by FIG. 33. Each object is represented by data stored in memory indicating, e.g., its shape, size, orientation, and position. An object's shape can be indicated by data indicating whether the object is a cylinder, a rectangular hexahedron, etc. The object's size measurements depend on the shape…Orientation can be defined—for a cylinder—by the orientation of its principal axis (in the three-dimensional coordinate system in which the model is defined). For a regular hexahedron, orientation can be defined by the orientation of its longest axis. The position of the object can be identified by the location of an object keypoint. For a cylinder, the keypoint can be the center of the circular face that is nearest the origin of the coordinate system. For a hexahedron, the keypoint can be the corner of the object closest to the origin.”, [par. 0312, ln. 1-10] “The position of a barcode (or other marking) on an object is additional evidence—even if the captured imagery does not permit such indicia to identify the object with certainty. For example, if a hexahedral shape is found to have has a barcode indicia on the smallest of three differently-sized faces, then candidate products that do not have their barcodes on their smallest face can be ruled out—effectively pruning the universe of candidate products, and increasing the confidence scores for products that have barcodes on their smallest faces.”); acquiring, using the at least one image (4) recorded using the test camera, in a code checking step (2) variable product information (12) contained in the marking ([par. 0310, ln. 
1-14] “A great variety of other information can be used in this manner Consider, for example, that the image of FIG. 31 may reveal identification markings on the cylindrical face of Object 3 exposed in that view. Such markings may comprise, for example, a barcode, or distinctive markings that comprise a visual fingerprint (e.g., using robust local features). A barcode database may thereby unambiguously identify the exposed cylindrical shape as a 10.5 oz. can of Campbell's Condensed Mushroom Soup. A database of product information—which may be the barcode database or another (located at a server in the supermarket or at a remote server)—is consulted with such identification information, and reveals that the dimensions of this Campbell's soup can are 3″ in diameter and 4″ tall.”); and determining, by the comparing, a correspondence characteristic ([par. 0536, ln. 1, to par. 0546, ln. 12] “The first keypoint descriptor from the input image is compared against each of the million or so reference keypoint descriptors in the FIG. 45 data structure. For each comparison, a Euclidean distance is computed, gauging the similarity between the subject keypoint, and a keypoint in the reference data. One of the million reference descriptors will thereby be found to be closest to the input descriptor. If the Euclidean distance is below a threshold value (“A”), then the input keypoint descriptor is regarded as matching a reference keypoint. A vote is thereby cast for the product associated with that reference keypoint, e.g., Kellogg's Rice Crispies cereal… This descriptor matching process is repeated for the second keypoint descriptor determined for the input image… then another vote for a product is cast... As a result, hundreds of votes will be cast. (Many hundred more descriptors may not be close enough, i.e., within threshold “A,” of a reference descriptor to merit a vote.) The final tally may show 208 votes for Kellogg's Rice Crispies cereal, 33 votes for Kellogg's Raisin Bran cereal, 21 votes for Kellogg's Nutri-Grain Snack bars, and lesser votes for many other products… A second threshold test is then applied… the cast votes are examined to determine if a reference product received votes exceeding a second threshold (e.g., 20%) of the total possible votes (e.g., 200, if the input image yielded 1000 keypoint descriptors)… this second threshold of 200 was exceeded by the 208 votes cast for Kellogg's Rice Crispies cereal. If this second threshold is exceeded by votes for one product, and only one product, then the input image is regarded to have matched that product. In the example case, the input image is thus identified as depicting a package of Kellogg's Crispies cereal, with a UPC code of 038000291210… In FIG. 48, the top quarter of various reference images are shown. FIG. 49 shows graphical elements that are found to be in common. Once graphical elements that are common between a threshold number (e.g., 2, 4, 10, 30, etc.) of reference images are found, they can be deduced to be logos. A robust feature identification procedure is then applied to the “logo,” and keypoint descriptors are calculated. The reference data is then searched for reference keypoint descriptors that match, within the Euclidean distance threshold “A,” these logo descriptors. Those that match are flagged, in the data structure, with information such as is shown in the right-most column FIG. 47.”), wherein the marking is successfully checked if the correspondence characteristic exceeds a predefinable success threshold ([par. 
0024, ln. 1-6] “…a confidence score is computed that indicates the certainty of an identification hypothesis about an item. This hypothesis is tested against collected evidence, until the confidence score exceeds a threshold (or until the process concludes with an ambiguous determination).”, [par. 0536, ln. 1, to par. 0546, ln. 12], [par. 0610, ln. 1-10] “One way to recover a message value from a watermarked signal is to perform correlation between the known message property of each message symbol and the watermarked signal. If the amount of correlation exceeds a threshold, for example, then the watermarked signal may be assumed to contain the message symbol. The same process can be repeated for different symbols at various locations to extract a message.”). 11. Regarding Claim 18, Rodriguez discloses the method of claim 16. Rodriguez discloses wherein in the code checking step (2) the product information (12), encrypted in at least one one-dimensional or multi-dimensional and machine-readable code and/or security element ([Fig. 54-55], [par. 0599, ln. 1-10] “FIG. 54 is a block diagram summarizing signal processing operations involved in embedding and reading a watermark. There are three primary inputs to the embedding process: the original, digitized signal 100, the message 102, and a series of control parameters 104. The control parameters may include one or more keys. One key or set of keys may be used to encrypt the message. Another key or set of keys may be used to control the generation of a watermark carrier signal or a mapping of information bits in the message to positions in a watermark information signal.”, [par. 0602, ln. 1-12] “The watermark embedding process 106 converts the message to a watermark information signal. It then combines this signal with the input signal and possibly another signal (e.g., an orientation pattern) to create a watermarked signal 108. The process of combining the watermark with the input signal may be a linear or non-linear function. Examples of watermarking functions include: S*=S+gX; S*=S(1+gX); and S*=S e.sup.gX; where S* is the watermarked signal vector, S is the input signal vector, and g is a function controlling watermark intensity. The watermark may be applied by modulating signal samples S in the spatial, temporal or some other transform domain.”), is acquired and is compared by a comparison of the variable product information with reference product information from the reference information database (13) ([par. 0102, ln. 1-9], [par. 0310, ln. 1-14]). 12. Regarding Claim 19, Rodriguez discloses the method of claim 16. Rodriguez further discloses in the image acquisition step (3), UV light, and/or visible light, and/or infrared light is directed by a light source onto the marking and is reflected thereby, is acquired by the test camera, and processed in a data processing facility ([par. 0132, ln. 1-8] “One particular implementation illuminates the items with a repeating sequence of three colors: white, infrared, and ultraviolet. Each color is suited for different purposes. For example, the white light can capture an overt product identification symbology; the ultraviolet light can excite anti-counterfeiting markings on genuine products; and the infrared light can be used to sense markings associated with couponing and other marketing initiatives.”, [par. 0310, ln. 1-14] see “…A database of product information… located at a server in the supermarket or at a remote server… is consulted with such identification information…”). 13. 
Regarding Claim 20, Rodriguez discloses the method of claim 16. Rodriguez further discloses wherein in the image acquisition step (3), the at least one image recorded by the test camera is stored in a storage device of the data processing facility ([Fig. 47], [par. 0310, ln. 1-14]). 14. Regarding Claim 25, Rodriguez discloses the method of claim 16. Rodriguez further discloses wherein based on the correspondence characteristic determined in the layout checking step (1), in an event of a value of the correspondence being exceeded ([par. 0024, ln. 1-6], [par. 0536, ln. 1, to par. 0546, ln. 12], [par. 0610, ln. 1-10]), at least one product information region is cut out of the actual layout ([par. 0606, ln. 1-11] “The watermark detector 110a operates on a digitized signal suspected of containing a watermark. As depicted generally in FIG. 54, the suspect signal may undergo various transformations 112a, such as conversion to and from an analog domain, cropping, copying, editing, compression/decompression, transmission etc. Using parameters 114 from the embedder (e.g., orientation pattern, control bits, key(s)), it performs a series of correlation or other operations on the captured image to detect the presence of a watermark. If it finds a watermark, it determines its orientation within the suspect signal.”, [par. 0618, ln. 1-16] “Consider an example where the watermark is defined in a transform domain (e.g., a frequency domain such as DCT, wavelet or DFT). The embedder segments the image in the spatial domain into rectangular tiles and transforms the image samples in each tile into the transform domain. For example in the DCT domain, the embedder segments the image into N by N blocks and transforms each block into an N by N block of DCT coefficients. In this example, the assignment map specifies the corresponding sample location or locations in the frequency domain of the tile that correspond to a bit position in the raw bits. In the frequency domain, the carrier signal looks like a noise pattern. Each image sample in the frequency domain of the carrier signal is used together with a selected raw bit value to compute the value of the image sample at the location in the watermark information signal.”, [par. 0619, ln. 1-12] “Now consider an example where the watermark is defined in the spatial domain. The embedder segments the image in the spatial domain into rectangular tiles of image samples (i.e. pixels). In this example, the assignment map specifies the corresponding sample location or locations in the tile that correspond to each bit position in the raw bits. In the spatial domain, the carrier signal looks like a noise pattern extending throughout the tile. Each image sample in the spatial domain of the carrier signal is used together with a selected raw bit value to compute the value of the image sample at the same location in the watermark information signal.”), wherein in the code checking step, the variable product information of the at least one product information region is checked ([par. 0102, ln. 1-9], [par. 0310, ln. 1-14], [par. 0618, ln. 1-16], [par. 0619, ln. 1-12]). 15. Regarding Claim 26, the claim language is directly analogous to claim 1 with the exception of “A device…” and “…wherein the device comprises a digital data processing facility which is designed such that…”. Rodriguez further discloses a device and a data processing facility which is designated such that it performs the functions of the method of Rodriguez ([Fig. 
73] see computer 1220, camera scanner 1243, remote computer 1249, [par. 0100, ln. 1-15] see “…A programmed computer…”, [par. 0310, ln. 1-14], [par. 0755, ln. 1-10] “The computer 1220 operates in a networked environment using logical connections to one or more remote computers, such as a remote computer 1249. The remote computer 1249 may be a server, a router, a peer device or other common network node, and typically includes many or all of the elements described relative to the computer 1220, although only a memory storage device 1250 has been illustrated in FIG. 73…”). 16. Regarding Claim 27, Rodriguez discloses the device of claim 26. Rodriguez further discloses wherein the data processing facility comprises a storage device that is connected to the test camera, wherein, in the image recording step (3), images recorded by the test camera can be stored in the storage device ([Fig. 73] see computer 1220, camera scanner 1243, remote computer 1249, [par. 0100, ln. 1-15] see “…A programmed computer…”, [par. 0310, ln. 1-14], [par. 0755, ln. 1-10]). 17. Regarding Claim 28, Rodriguez discloses the device of claim 27. Rodriguez further discloses wherein the data processing facility comprises a database, wherein the database is connected for signal transmission to the storage device, such that images (4) stored in the storage device can be matched with images (7) of the database ([Fig. 73] see computer 1220, camera scanner 1243, remote computer 1249, [par. 0100, ln. 1-15] see “…A programmed computer…”, [par. 0310, ln. 1-14], [par. 0755, ln. 1-10]). 18. Regarding Claim 29, Rodriguez discloses the device of claim 26. Rodriguez further discloses wherein the test camera comprises at least one light source, by means of which the marking to be checked can be illuminated ([par. 0132, ln. 1-8], [par. 0388, ln. 1-4] “The unit may also include an illumination source (e.g., a visible, IR, or UV LED) which is activated during a period of image capture (e.g., a thirtieth of a second, every 5 minutes) to assure adequate illumination.”). 19. Regarding Claim 30, Rodriguez discloses the device of claim 26. Rodriguez further discloses wherein the test camera is designed to detect ultraviolet light and/or visible light and/or infrared light ([par. 0132, ln. 1-8], [par. 0363, ln. 1-9] “…the Kinect sensor does not rely on feature extraction or feature tracking. Instead, it employs a structured light scanner (a form of range camera) that works by sensing the apparent distortion of a known pattern projected into an unknown 3D environment by an infrared laser projector, and imaged by a monochrome CCD sensor. From the apparent distortion, the distance to each point in the sensor's field of view is discerned.”, [par. 0365, ln. 1-10] “In Kinect-related embodiments of the present technology, the sensor typically is not moved. Its 6DOF information is fixed. Instead, the items on the checkout conveyor move. Their motion is typically in a single dimension (along the axis of the conveyor), simplifying the volumetric modeling. As different surfaces become visible to the sensor (as the conveyor moves), the model is updated to incorporate the newly-visible surfaces. The speed of the conveyor can be determined by a physical sensor, and corresponding data can be provided to the modeling system.”). Claim Rejections - 35 USC § 103 20. The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 21. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. 22. Claims 17 and 21-24 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Publication No. 2021/0157998 to Rodriguez et al. (hereinafter Rodriguez), and further in view of U.S. Publication No. 2021/0374926 to Plant et al. (hereinafter Plant). 23. Regarding Claim 17, Rodriguez discloses wherein a {language of the} marking is identified in the code checking step (2) by a comparison of the variable product information (12) with language information from the reference information database (13) ([par. 0169, ln. 1-15] “…a segmentation module identifies and extracts the portion of the camera imagery depicting the shaded surface of item 90. (Known 2D segmentation can be used here.) This image excerpt is passed to a text detector module that identifies at least one prominent alphabetic character. (Known OCR techniques can be used.) More particularly, such module identifies a prominent marking in the image excerpt as being a text character, and then determines its orientation, using various rules. (E.g., for capital letters B, D, E, F, etc., the rules may indicate that the longest straight line points up-down; “up” can be discerned by further, letter-specific, rules. The module applies other rules for other letters.) The text detector module then outputs data indicating the orientation of the analyzed symbol.”, [par. 0310, ln. 1-14], [Fig. 47-50], see Auxiliary Info, Logo Print, in database of Fig. 47 which includes OCR identifiable markings, as shown in Figs. 48-50). Specifically, Rodriguez discloses identifying at least the text of a label using OCR and language information, and one of ordinary skill in the art, before the effective filling date of the claimed invention, would recognize that OCR typically includes identifying the language of the text, though it is not specifically disclosed in Rodriguez to identify a language. However, Plant specifically discloses wherein the labels can be in multiple languages ([par. 0068, ln. 1-2] “Multi-language string variables have their own particular representation in the variables pane.”, [par. 0105, ln. 1-20] “The printed label in its simplest sense might be a product code expressed in alphanumeric text. More usually there will be a mixture of texts, such as branding, use instructions, batch numbers, product numbers, certification marks, manufacture dates, use-by dates, weight values, and so on. The text may be present in the same of different typefaces, font sizes and in different orientations (e.g. 
horizontal or vertical). Text may be in different languages or may be in non-Latin text, such as Chinese and Japanese symbols or Arabic letters.”). One of ordinary skill in the art, before the effective filling date of the claimed invention, would recognize Rodriguez and Plant as within the same field of product labeling, and as analogous to the claimed invention. The motivation to combine would have been obvious to one of ordinary skill in the art, in that by further identifying a language using OCR you could expand the applicability and accuracy of the method to more products and/or locations (e.g., foreign goods/stores where the language is not Latin based). Specifically, given that OCR typically includes identifying a language, one of ordinary skill in the art, before the effective filling date of the claimed invention, would have combined the method and OCR of Rodriguez to identify multiple languages as disclosed in Plant through known means, with no change to their respective function, and the combination would have yielded nothing more than predicable results. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the method and OCR of Rodriguez to identify multiple languages as disclosed in Plant to obtain the invention as specified in claim 17. 24. Regarding Claim 21, Rodriguez discloses the method of claim 20. Rodriguez discloses wherein {in a reference database generation step (6), a target layout comprising a desired static data of the marking is generated}, wherein the target layout is stored in the reference information database ([par. 0102, ln. 1-9], [par. 0310, ln. 1-14]), wherein the static data of the target layout are compared with the static data of an actual layout ([par. 0310, ln. 1-14], [par. 0310, ln. 14-27] “In this case, the model segmentation depicted in FIG. 33 is known to be wrong. The cylinder is not 8″ tall. The model is revised as depicted in FIG. 36. The certainty score of Object 3 is increased to 100, and a new, wholly concealed Object 6 is introduced into the model. Object 6 is assigned a certainty score of 0—flagging it for further investigation. (Although depicted in FIG. 36 as filling a rectangular volume below Object 3 that is presumptively not occupied by other shapes, Object 6 can be assigned different shapes in the model.) For example, Objects 1, 2, 3, 4 and 5 can be removed from the volumetric model, leaving a remaining volume model for the space occupied by Object 6 (which may comprise multiple objects or, in some instances, no object).”, [par. 0312, ln. 1-10] “The position of a barcode (or other marking) on an object is additional evidence—even if the captured imagery does not permit such indicia to identify the object with certainty. For example, if a hexahedral shape is found to have has a barcode indicia on the smallest of three differently-sized faces, then candidate products that do not have their barcodes on their smallest face can be ruled out—effectively pruning the universe of candidate products, and increasing the confidence scores for products that have barcodes on their smallest faces.”, [par. 0313, ln. 1-5] “Similarly, the aspect ratio (length-to-height ratio) of barcodes varies among products. This information, too, can be sensed from imagery and used in pruning the universe of candidate matches, and adjusting confidence scores accordingly.”, [par. 0316, ln. 
1-15] “…different segmented shapes can be refined by reference to other sensor data, consider weight data. Where the weight of the pile can be determined (e.g., by a conveyor or cart weigh scale), this weight can be analyzed and modeled in terms of component weights from individual objects—using reference weight data for such objects retrieved from a database. When the weight of the identified objects is subtracted from the weight of the pile, the weight of the unidentified object(s) in the pile is what remains. This data can again be used in the evidence-based determination of which objects are in the pile. (For example, if one pound of weight in the pile is unaccounted for, items weighing more than one pound can be excluded from further consideration.)”), and wherein a correspondence characteristic is determined by the comparison ([par. 0310, ln. 1-27] see “The certainty score of Object 3 is increased to 100” following size comparisons, [par. 0312, ln. 1-10], [par. 0316, ln. 1-15]). Rodriguez does not specifically disclose wherein in a reference database generation step (6), a target layout comprising a desired static data of the marking is generated. However, Plant teaches a reference database generation step, a target layout comprising a descried static data of the marking is generated ([par. 0025, ln. 1-9] “…the reference image is an e-image constructed from information which is used to instruct the printer when printing the label. By using the image sent to the printer to construct (or same process to construct) the reference e-image the degree to which variation between expected printed image and obtained scanned image can vary is greatly limited. However, alternatively, the reference image may be obtained as an imported image of an exemplar compliant label.”). One of ordinary skill in the art, before the effective filling date of the claimed invention, would recognize Rodriguez and Plant as within the same field of product labeling, and as analogous to the claimed invention. The motivation to combine would have been obvious to one of ordinary skill in the art, and is disclosed in Plant, wherein by constructing the reference database to comprise a target layout comprising a desired static data of the marking you prevent variation between the actual and target layout from increasing unnecessarily. One of ordinary skill in the art, before the effective filling date of the claimed invention, would have combined the method of Rodriguez with the reference database generation of Plant through known means, with no change to their respective function, and the combination would have yielded nothing more than predicable results. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the method of Rodriguez with the reference database generation of Plant to obtain the invention as specified in claim 21. 25. Regarding Claim 22, a combination of Rodriguez and Plant teaches the method of claim 21. Rejections analogous to claim 21 are further applicable to claim 22. Specifically, Rodriguez discloses in the image recording step (3), the actual layout is generated from the static data from the at least one image, wherein the actual layout (5) is stored in the storage device ([par. 0293, ln. 1-18], [par. 0312, ln. 1-10]), and wherein the static data of the actual layout are compared with the static data of the target layout, wherein a correspondence characteristic is determined by the comparison ([par. 
0310, ln. 1-27], [par. 0312, ln. 1-10], [par. 0316, ln. 1-15]). Specifically, one of ordinary skill in the art, before the effective filling date of the claimed invention, would recognize that the image acquired by Rodriguez would be the actual layout to be compared to the reference data comprising the target layout as generated in the reference database generation of Plant. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the method of Rodriguez with the reference database generation of Plant to obtain the invention as specified in claim 22. 26. Regarding Claim 23, a combination of Rodriguez and Plant teaches the method of claim 21. Rodriguez further discloses wherein the target layout is a generated edge model (8) of the desired marking, wherein the target layout is stored in the reference information database as an edge model (8) ([par. 0180, ln. 1-8] “Another implementation functions without regard to the presence of text in the imagery. Referring to FIG. 16, the system passes the segmented region to an edge finding module, which identifies the longest straight edge 98 in the excerpt… The angle of this line serves as a clue to the orientation of any watermark.”, [par. 0290, ln. 1 to par. 0292, ln.15] “Geometrical rules are applied to identify faces that form part of the same object. For example, as shown in FIG. 31A, if edges A and B are parallel, and terminate at opposite end vertices (I, II) of an edge C—at which vertices parallel edges D and E also terminate, then the region between edges A and B is assumed to be a surface face that forms part of the same object as the region (surface face) between edges D and E…. some rules take precedence over others. Consider edge F in FIG. 32. Normal application of the just-stated rule would indicate that edge F extends all the way to the reference plane. However, a contrary clue is provided by parallel edge G that bounds the same object face (H). Edge G does not extend all the way to the reference plane; it terminates at the top plane of “Object N.” This indicates that edge F similarly does not extend all the way to the reference plane, but instead terminates at the top plane of “Object N.” This rule may be stated as: parallel edges originating from end vertices of an edge (“twin edges”) are assumed to have the same length. That is, if the full length of one edge is known, a partially-occluded twin edge is deduced to have the same length.”, [par. 0293, ln. 1-18], [par. 0310, ln. 1-27]). Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the method of Rodriguez with the reference database generation of Plant to obtain the invention as specified in claim 23. 27. Regarding Claim 24, a combination of Rodriguez and Plant teaches the method of claim 21. Rejections analogous to claim 22 and 23 are further applicable to claim 24. Specifically, Rodriguez discloses wherein the actual layout is an edge model (5), wherein the edge model (5) is generated from the at least one image recorded in the image recording step ([par. 0180, ln. 1-8], [par. 0290, ln. 1 to par. 0292, ln.15], [par. 0293, ln. 1-18], [par. 0310, ln. 1-27]), and wherein the actual layout is stored in the storage device as an edge model (5) for comparison with the target layout ([par. 0293, ln. 1-18], [par. 0310, ln. 1-27]). 
Specifically, one of ordinary skill in the art, before the effective filling date of the claimed invention, would recognize that the image acquired by Rodriguez would be the actual layout to be compared to the reference data comprising the target layout as generated in the reference database generation of Plant. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filling date of the claimed invention, to combine the method of Rodriguez with the reference database generation of Plant to obtain the invention as specified in claim 24. Conclusion 28. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. See PTO-892. Any inquiry concerning this communication or earlier communications from the examiner should be directed to PAULO ANDRES GARCIA whose telephone number is (703)756-5493. The examiner can normally be reached Mon-Fri, 8-4:30PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chan Park can be reached on (571)272-7409. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PAULO ANDRES GARCIA/Examiner, Art Unit 2669 /CHAN S PARK/Supervisory Patent Examiner, Art Unit 2669
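
As background for the examiner's mapping of the claimed "correspondence characteristic" and "success threshold" onto Rodriguez, the quoted passages (pars. 0536-0546) describe nearest-neighbor keypoint matching with two thresholds: a Euclidean-distance cutoff "A" for casting a vote, and a cutoff of 20% of the total possible votes for declaring a match. A rough sketch of that voting logic follows; the array shapes and names are assumptions for illustration of the cited reference's description, not the applicant's claimed method or Rodriguez's actual implementation.

```python
import numpy as np

def identify_product(input_desc, ref_desc, ref_labels, dist_thresh_a, vote_frac=0.20):
    """Nearest-neighbor descriptor voting as described in the quoted Rodriguez passages.

    input_desc: (N, D) keypoint descriptors from the captured image
    ref_desc:   (M, D) reference descriptors; ref_labels: (M,) product names
    """
    votes = {}
    for d in input_desc:
        dists = np.linalg.norm(ref_desc - d, axis=1)   # Euclidean distance to every reference descriptor
        j = int(np.argmin(dists))
        if dists[j] < dist_thresh_a:                   # first threshold "A": close enough to cast a vote
            votes[ref_labels[j]] = votes.get(ref_labels[j], 0) + 1

    # Second threshold: a product must collect more than 20% of the total possible
    # votes, and must be the only product to do so, for the image to be "identified".
    needed = vote_frac * len(input_desc)
    winners = [product for product, v in votes.items() if v > needed]
    return winners[0] if len(winners) == 1 else None
```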

Prosecution Timeline

Oct 06, 2023: Application Filed
Dec 09, 2025: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602823
RE-LOCALIZATION OF ROBOT
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597280
IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND PROGRAM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12597161
SYSTEMS AND METHODS FOR OBJECT TRACKING AND LOCATION PREDICTION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586400
IMAGE PROCESSING APPARATUS, CONTROL METHOD THEREOF, AND STORAGE MEDIUM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12586176
SYSTEMS AND METHODS FOR PREDICTING AN INCOMING ROTATIONAL BALANCE OF AN UNFINISHED WORKPIECE
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview (+17.2%): 99%
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 41 resolved cases by this examiner. Grant probability derived from career allow rate.
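
The projected figures line up with the career stats above; here is a quick reconstruction of the arithmetic, where the 99% cap and the rounding behavior are assumptions about how the dashboard presents the numbers rather than a documented formula.

```python
# Hypothetical reconstruction of the headline projections from the examiner stats.
granted, resolved = 34, 41
grant_probability = granted / resolved                            # 0.829 -> shown as 83%
interview_lift = 0.172                                            # +17.2 percentage points
with_interview = min(grant_probability + interview_lift, 0.99)    # apparently capped at 99%
print(f"{grant_probability:.0%} / {with_interview:.0%}")          # 83% / 99%
```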
