DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-24 are pending.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2, 4-6, 9-17 and 21-24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Marrion et al (US20150036876) in view of Mishra et al (US20210233269).
Regarding claims 1, 12 and 22, Marrion teaches a method for assigning a symbol to an object in an image, the method comprising:
(Marrion, Figs. 1A-1C, "associating codes with objects.", [0004]; "The present technology provides machine vision systems and machine vision processes for reading codes on objects and associating the codes with objects.", [0024]; associating a code with an object; under the Broadest Reasonable Interpretation (BRI), a "code" (such as a barcode or 2D code) reads on the claimed "symbol")
receiving the image captured by an imaging device, the symbol located within the image;
(Marrion, "receive from the area-scan camera an image of at least a portion of one or more objects in the first workspace", [0011]; "determine an image location of a code in the image", [0009]; receiving an image captured by an imaging device (an area-scan camera) and identifying the code (symbol) that is located within the captured image)
receiving, in a three-dimensional (3D) coordinate space, a 3D location of one or more points that corresponds to pose information indicative of a 3D pose of the object in the image;
(Marrion, Fig. 2A; "receive from the dimensioner dimensioning data associated with the one or more objects in the second workspace", [0005]; "dimensioner 120 can generate dimensioning data (e.g., a point cloud or heights of points on objects 140 above conveyor belt 107, along with pose information) for objects 140.", [0030]; "In some embodiments, dimensioners can produce dimensioning data (or object pose data) including but not limited to the 3D pose of cuboids, the 3D pose of surfaces of objects, and/or a 3D point cloud representation of an object and or its surfaces.", [0030]; Fig. 4; "DimensionerAcquiredData3D coordinate space 435 can be the internal three-dimensional coordinate space of a dimensioner (e.g., dimensioner 120), ... The point cloud can include a plurality of points, with each point's coordinates expressed in DimensionerAcquiredData3D coordinate space 435”, [0054]; receiving dimensioning data that includes 3D pose information and a 3D point cloud representing the object, wherein the 3D locations of these points are expressed in a 3D coordinate space)
determining a two-dimensional (2D) location of the one or more points by mapping the 3D location of the one or more points of the object in the 3D coordinate space to a 2D location within the image in a 2D image coordinate space; and
(Marrion, "For the inverse mapping, three dimensional points in ReaderCamera3D coordinate space 420 are mapped to two dimensional points in ReaderCamera2D coordinate space 415.", [0050]; on the other hand, Mishra teaches: "projecting the 3D boundary features onto a 2D space of the image, using the pose, to obtain 2D boundary features", [0008]; "The 3D boundary features are projected onto a 2D space, which is typically the same as the 2D coordinate system of the captured image", [0061]; while Marrion's primary embodiment discusses back-projecting a 2D image location into a 3D "ray" (Marrion, “determine an image location of a code in the image; determine a ray in a shared coordinate space that is a back-projection of the image location of the code; receive from the dimensioner dimensioning data associated with the one or more objects in the second workspace; determine one or more surfaces of the one or more objects based on the dimensioning data, coordinates of the one or more surfaces expressed in the shared coordinate space” [0005]), this particular limitation requires the reverse: mapping 3D points to a 2D image location; Marrion explicitly teaches the mathematical capability to perform this required 3D-to-2D mapping via an "inverse mapping" where "three dimensional points... are mapped to two dimensional points" in the camera's 2D coordinate space (Marrion, [0050]); Mishra explicitly teaches a method of projecting 3D features of an object onto a 2D space of an image to evaluate spatial relationships (Mishra, [0008], [0061]))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Marrion and Mishra by utilizing Marrion's built-in 3D-to-2D inverse mapping mechanism to map the 3D object points to a 2D location within the 2D image coordinate space, as motivated by Mishra's teaching of projecting 3D features into the 2D image space for spatial evaluation. The combination of Marrion and Mishra also teaches other enhanced capabilities.
The combination of Marrion and Mishra further teaches:
assigning the symbol to the object based on a relationship between a 2D location of the symbol in the image in the 2D image coordinate space and the 2D location of the one or more points of the object in the image in the 2D image coordinate space.
(Marrion, "associate the code with the object”, [0005]; Mishra, “projecting the 3D boundary features onto a 2D space of the image, using the pose, to obtain 2D boundary features”, [0008]; Marrion teaches assigning (associating) the code to the object based on evaluating the spatial relationship between the code and the object (which Marrion primarily achieves in 3D using the back-projected ray). Mishra teaches projecting 3D object features onto the 2D image space to evaluate spatial alignments directly in 2D. It would have been obvious to a person of ordinary skill in the art to modify the comparison method of Marrion to instead forward-project the 3D points of the object into the 2D image coordinate space (as taught by Marrion, [0050] and Mishra, [0008]) and determine the relationship directly between the 2D location of the code and the mapped 2D location of the object points in the 2D image coordinate space, in order to associate the code with the object. This substitution of a 3D-to-2D comparison for a 2D-to-3D comparison represents a predictable use of known coordinate transformation techniques to achieve the identical result of determining whether the code is physically located on the object)
Regarding claims 2 and 17, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the method according to claim 1, further comprising:
determining a surface of the object based on the 2D location of the one or more points of the object within the image in the 2D image coordinate space; and
assigning the symbol to the surface of the object based on a relationship between the 2D location of the symbol in the image and the surface of the object.
(Marrion, "determine one or more surfaces of the one or more objects based on the dimensioning data, coordinates of the one or more surfaces expressed in the shared coordinate space; determine a first surface of the one or more surfaces that intersects the 3D ray; identify an object of the one or more objects that is associated with the first surface; and associate the code with the object", [0005]; “determine an image location of a code in the image”, [0009]; determining surfaces in 2D/3D spaces from dimensioning data and associating the code to the object via the intersecting surface, which aligns with determining and assigning to a surface based on 2D locations and relationships; the code’s image location is used to form a back-projected ray and uses 3D dimensioning data to determine object surfaces)
Regarding claim 4, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the method according to claim 1, further comprising determining an edge of the object in the image based on imaging data of the image.
(Marrion, see comments on claim 2; “When a first portion of the calibration target (e.g., a leading edge) is acquired (e.g., measured) by the dimensioner, the dimensioner can record the encoder count as EncoderCountStart”, [0066]; Mishra, "This type of approach determines edges of an object using RGB sensor data.", [0002]; "generating a gradient map from the image, for each of the 2D boundary features, estimating an edge score for an area on the gradient map", [0008]; determining edges and/or edge scores of an object directly using the imaging data generated from the image)
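For illustration only, a minimal sketch of a gradient-map edge score of the general type Mishra describes is shown below; the windowed-average scoring, window radius, and synthetic test image are assumptions for this example and are not taken from either reference.

```python
import numpy as np

def gradient_magnitude(image):
    """Per-pixel gradient magnitude of a grayscale image (a simple 'gradient map')."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy)

def edge_score(grad_map, boundary_uv, radius=2):
    """Average gradient magnitude in a small window around each projected 2D
    boundary point; a high score suggests a real object edge at that location."""
    h, w = grad_map.shape
    scores = []
    for u, v in boundary_uv:
        u0, u1 = max(int(u) - radius, 0), min(int(u) + radius + 1, w)
        v0, v1 = max(int(v) - radius, 0), min(int(v) + radius + 1, h)
        scores.append(grad_map[v0:v1, u0:u1].mean())
    return float(np.mean(scores))

# Hypothetical image: a bright square object on a dark background
img = np.zeros((100, 100)); img[30:70, 30:70] = 255.0
score = edge_score(gradient_magnitude(img), [(30, 50), (70, 50)])  # points on the square's edges
print(score)   # a nonzero score indicates strong image gradients at the projected boundary
```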
Regarding claims 5 and 24, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the method according to claim 1, further comprising determining a confidence score for the symbol assignment.
(Marrion, “At step 280, the machine vision system selects a first surface of the one or more candidate surfaces based on one or more surface selection criteria …the selection criteria can be probability and/or confidence based”, [0045])
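For illustration only, one way a confidence score for the symbol assignment could be derived from the projected geometry is sketched below; the linear falloff near the object boundary and the margin value are assumptions for this example, not a teaching of Marrion.

```python
import numpy as np

def assignment_confidence(symbol_uv, corners_uv, margin=25.0):
    """Toy confidence: 1.0 when the symbol center is well inside the bounding box
    of the projected object corners, falling off linearly near the boundary."""
    xs, ys = corners_uv[:, 0], corners_uv[:, 1]
    dx = min(symbol_uv[0] - xs.min(), xs.max() - symbol_uv[0])   # distance to nearest vertical side
    dy = min(symbol_uv[1] - ys.min(), ys.max() - symbol_uv[1])   # distance to nearest horizontal side
    return float(np.clip(min(dx, dy) / margin, 0.0, 1.0))

# Hypothetical projected corners and symbol location (pixels)
corners = np.array([[280., 200.], [360., 200.], [360., 280.], [280., 280.]])
print(assignment_confidence(np.array([330., 250.]), corners))   # 1.0: symbol is well inside the object
```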
Regarding claims 6 and 14, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the method according to claim 1, wherein the 3D location of one or more points is received from a 3D sensor.
(Marrion, Fig. 1; “dimensioners can produce dimensioning data (or object pose data) including but not limited to the 3D pose of cuboids, the 3D pose of surfaces of objects, and/or a 3D point cloud representation of an object and or its surfaces”, [0030])
Regarding claim 9, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the method according to claim 1,
wherein the 3D location of the one or more points is acquired at a first time and the image is acquired at a second time, and
wherein the mapping of the 3D location of the one or more points in the 3D coordinate space to the 2D location within the image in the 2D coordinate space comprises mapping the 3D location of the one or more points from the first time to the second time.
(Marrion, Fig. 1; the 3D dimensions are measured by dimensioner 120 (second workspace) at a first time, and the objects are subsequently imaged by camera 115 (first workspace) at a second time; mapping the 3D data into the 2D image space therefore accounts for the conveyor motion between the two acquisition times; "When a first portion of the calibration target (e.g., a leading edge) is acquired (e.g., measured) by the dimensioner, the dimensioner can record the encoder count as EncoderCountStart. When a last portion of the calibration target is acquired by the dimensioner, the dimensioner can record the encoder count as EncoderCountEnd.", [0066]; "When the reader camera reads a code identifying one of the surfaces of the calibration target, the reader camera can record the encoder count as EncoderCountCodeRead and the reader camera can store the acquired image.", [0067]; "MotionVectorPerEncCountConveyor3D 445 can be defined to describe the motion of the conveyor belt with respect to the encoder count (e.g., representing the encoder pulse count distance).", [0069]; "FrontWhenDimensioned3DFromFrontWhenRead3D=MotionVectorPerEncCountConveyor3D*(EncoderCountCodeRead−EncoderCountEnd)", [0075]; mapping the 3D points from the dimensioning time to the imaging time using the encoder counts and motion vectors to account for the temporal difference)
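For illustration only, a minimal sketch paralleling the encoder-based motion compensation quoted above (a motion vector per encoder count multiplied by the encoder-count difference) is shown below; the function and variable names and the numeric values are assumptions for this example.

```python
import numpy as np

def map_points_to_read_time(points_at_dimensioning, motion_per_count,
                            count_at_dimensioning, count_at_read):
    """Translate 3D points measured at dimensioning time to their positions at
    image-acquisition (code read) time, assuming rigid conveyor motion of
    motion_per_count per encoder tick."""
    delta = count_at_read - count_at_dimensioning        # encoder ticks elapsed
    return points_at_dimensioning + np.asarray(motion_per_count) * delta

# Hypothetical values: the conveyor moves 0.5 mm along +x per encoder count
pts_dim = np.array([[100.0, 20.0, 0.0], [150.0, 20.0, 0.0]])
pts_read = map_points_to_read_time(pts_dim, [0.5, 0.0, 0.0],
                                   count_at_dimensioning=1000, count_at_read=1600)
print(pts_read)   # each point shifted 300 mm downstream before projecting into the image
```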
Regarding claims 10 and 15, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the method according to claim 1, wherein the pose information comprises a corner of the object in the 3D coordinate space.
(Marrion, Fig. 1B; “As illustrated in FIG. 1B, object 140b is passing dimensioner scan line 155 of dimensioner 120. In some embodiments, dimensioner 120 can determine the height of points on objects (e.g., object 140b) along dimensioner scan line 155. By combining the data obtained for each scan of dimensioner scan line 155 while, e.g., object 140b passes dimensioner scan line 155 of dimensioner 120, height information about the surfaces of object 140b can be determined. Additionally, dimensioner 120 (or machine vision processor 125) can determine the pose of object 140b and/or the pose of one or more of the surfaces of object 140b in the coordinate space of dimensioner 120 based on the data obtained for each scan of dimensioner scan line”, [0033]; the determined 3D pose of the object and of its surfaces encompasses the corners of the object in the dimensioner's 3D coordinate space)
Regarding claims 11 and 16, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the method according to claim 1, wherein the pose information comprises point cloud data.
(Marrion, Fig. 1; “dimensioner 120 can generate dimensioning data (e.g., a point cloud or heights of points on objects 140 above conveyor belt 107, along with pose information) for objects 140”, [0030]; the point cloud is generated as part of the dimensioning data together with the pose information, such that the pose information comprises point cloud data)
Regarding claim 13, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the system according to claim 12, further comprising:
a conveyor configured to support and transport the object; and
a motion measurement device coupled to the conveyor and configured to measure movement of the conveyor.
(Marrion, Fig. 1; conveyor system 105; conveyor belt 107; position encoder 110; “Position encoder 110 (e.g., a tachometer) can be connected to conveyor system 105 to generate an encoder pulse count that can be used to identify the position of conveyor belt 107 along the direction of arrow 135 … position encoder 110 can increment an encoder pulse count each time conveyor belt 107 moves a pre-determined distance (encoder pulse count distance) in the direction of arrow 135”, [0027]; a conveyor belt is configured to support and transport objects along its length)
Regarding claim 21, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the system according to claim 12, wherein assigning the symbol to the object comprises assigning the symbol to a surface.
(Marrion, "determine one or more surfaces of the one or more objects based on the dimensioning data, coordinates of the one or more surfaces expressed in the shared coordinate space", [0015]; " determine a first surface of the one or more surfaces that intersects the 3D ray; identify an object of the one or more objects that is associated with the first surface; and associate the code with the object", [0009]; assigning the code to the object via the intersecting surface, which encompasses assigning to a surface of the object)
Regarding claim 23, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination further teaches the method according to claim 22, wherein assigning the symbol to the surface comprises determining an intersection between the surface and the image in the 2D coordinate space.
(Marrion, "determine one or more surfaces of the one or more objects based on the dimensioning data, the one or more surfaces expressed in the shared coordinate space; determine a first surface of the one or more surfaces that intersects the 3D ray; identify an object of the one or more objects that is associated with the first surface; and associate the code with the object."; assigning via determining intersection between surface and ray (back-projected from 2D image), which aligns with intersection in 2D space relationships)
Claim(s) 7-8 and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Marrion et al (US20150036876) in view of Mishra et al (US20210233269) and further in view of Tran et al (US2002/0118873).
Regarding claims 7 and 19, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination does not expressly disclose, but Tran teaches, the method according to claim 1, wherein the image includes a plurality of objects, the method further comprising:
determining whether the plurality of objects overlap in the image.
(Tran, “FIG. 1, a system 10 for detecting multiple object conditions, such as side-by-side and overlapped parcels or packages on a package conveyor, is shown”, [0025])
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Tran into the modified system or method of Marrion and Mishra in order to enable a code symbol assigning system capable of correctly associating the code symbol with the correct object when objects overlap. The combination of Marrion, Mishra, and Tran also teaches other enhanced capabilities.
Regarding claims 8 and 20, the combination of Marrion and Mishra teaches its/their respective base claim(s).
The combination of Marrion, Mishra, and Tran teaches the method according to claim 1,
wherein the image includes the object having a first boundary with a margin and a second object having a second boundary with a second margin, and
the method further comprising: determining whether the first boundary and the second boundary overlap in the image.
(Tran, FIG. 1, “Each captured image is processed using the machine vision computer by first windowing each parcel using a Region of Interest (ROI). The processing continues by counting the number of edges appearing in the ROI. The presence of other than a single parcel condition is determined if the number of edges exceeds four”, [0010]; image quality varies with the objects and imaging conditions, so the edges of an object may appear thinner or thicker in the image; an edge must be of sufficient size to be counted, “If the edge size exceeds the threshold derived from the boundary size, then the edge is counted”, [0037]; when objects are in close proximity in the image and their edges (i.e., boundary margins) are thick or poorly defined, fewer edges may be counted and multiple objects may be perceived as part of the same object, i.e., as overlapping objects)
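For illustration only, a simplified one-dimensional analogue of Tran's edge counting within a region of interest is sketched below; the single scan-line simplification, gradient threshold, and synthetic test image are assumptions for this example and differ from Tran's full two-dimensional edge-size criterion.

```python
import numpy as np

def count_edges_in_roi(image, roi, grad_thresh=50.0):
    """Count distinct edge crossings along the horizontal centerline of a region
    of interest; consecutive strong-gradient pixels are merged into one edge."""
    y0, y1, x0, x1 = roi
    row = image[(y0 + y1) // 2, x0:x1].astype(float)
    strong = np.abs(np.diff(row)) > grad_thresh                  # pixels with a strong gradient
    starts = strong & ~np.concatenate(([False], strong[:-1]))    # where an edge run begins
    return int(starts.sum())

# Hypothetical scene: two bright parcels side by side on a dark belt
img = np.zeros((100, 200))
img[20:80, 10:90] = 200.0     # parcel 1
img[20:80, 110:190] = 200.0   # parcel 2
n = count_edges_in_roi(img, roi=(20, 80, 0, 200))
print(n)   # 4 crossings here; a single parcel along this line would produce only 2
```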
Allowable Subject Matter
Claim(s) 3 and 18 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claim(s) 3 and 18 recite(s) limitation(s) related to determining symbol assignment deviations from aggregating assigned symbols associated with a plurality of images. There are no explicit teachings to the above limitation(s) found in the prior art cited in this office action and from the prior art search.
Response to Arguments
Applicant's arguments (appeal brief) filed on 1/12/2026 with respect to one or more of the pending claims have been fully considered but are moot in view of the new ground(s) of rejection.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JIANXUN YANG whose telephone number is (571)272-9874. The examiner can normally be reached on MON-FRI: 8AM-5PM Pacific Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached on (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JIANXUN YANG/
Primary Examiner, Art Unit 2662 2/17/2026