Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 23 October 2025 have been fully considered. Examiner thanks Applicant for the thorough review of the prior office action. However, the arguments presented by Applicant are not persuasive. Applicant’s arguments are paraphrased below:
The prior art cited fails to disclose decoding an indicium in the first image and at least a portion of an indicium in the second image, as a “teaching of item information being associated with a barcode does not expressly or inherently teach a decoding of an indicium in a given image”.
The prior art cited only notes the capture of different FOVs of different imagers, but this does not expressly or implicitly indicate capture of the same particular indicium of the same item between both images.
The prior art cited does not disclose calculating an ROI from the second image that is then used within a clustering step to create an ROI image; it discloses only a distance-based clustering approach.
Respectfully, Examiner disagrees.
Regarding Applicant’s first argument, Applicant asserts that the “teaching of item information being associated with a barcode does not expressly or inherently teach a decoding of an indicium in a given image”. However, prior art reference Krishnamurthy further discloses decoding of an indicium specifically directed to the scanning of an item’s barcode across frames (para. 0079, wherein the device of Krishnamurthy may detect barcodes through image processing methods including OCR and object detection algorithms, and wherein this detection, scanning, and identification of the barcode, as additionally discussed in para. 0062, constitutes the “decoding” process of the instant application). Thus, Krishnamurthy does disclose the decoding of an indicium and the subsequent recovery of information associated with the decoded indicium, more specifically, payload information.
Regarding Applicant’s second argument, Applicant asserts that there is no mechanism by which the detection of the same indicium in both fields of view can be guaranteed. However, Krishnamurthy does disclose an item tracking device, which contains both a region-of-interest cropping feature and a location detection feature employing a homography operation. As specified within paragraphs 123-132 (the required prior steps to the steps of paragraphs 132-136 cited in the prior OA), the multiple cameras having multiple, different fields of view (FOVs) are arranged in particular positions and orientations so as to capture multiple angles of each of the items. As a result, these multiple cameras generate multiple images of each item from a variety of FOVs. Subsequently, to align and match items detected across different images and positions, a homography is calculated and images are aligned according to their detected features (para. 0130). The indicia across different frames are used as alignment features, regions of interest are segmented out, and pixel locations are tracked across images to find corresponding (matching) indicia. Thus, Krishnamurthy does disclose finding the same indicium across both images in both fields of view.
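For illustration only (this sketch is not part of the record and is not drawn from Krishnamurthy), the homography operation described above amounts to mapping a detected indicium's pixel location from one camera's image into another camera's image through a 3x3 matrix. The function name and matrix values below are hypothetical:

```python
# Illustrative-only sketch: projecting a pixel location through a 3x3
# homography to find the corresponding location in the other camera's FOV.

def project_point(H, pt):
    """Apply homography H (3x3 row-major nested lists) to pixel (x, y),
    returning the mapped (x', y') after the homogeneous divide."""
    x, y = pt
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# Hypothetical homography relating the second FOV to the first: a pure
# pixel translation of (+50, +20), chosen only for illustration.
H = [[1.0, 0.0, 50.0],
     [0.0, 1.0, 20.0],
     [0.0, 0.0, 1.0]]

# A barcode fragment detected at (100, 200) in the second image maps to
# the corresponding location in the first image under this homography.
corresponding = project_point(H, (100, 200))
```

In practice the homography would be estimated from matched image features rather than specified by hand; the translation-only matrix here simply keeps the arithmetic transparent.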
Regarding Applicant’s third argument, Applicant asserts that the prior art cited in the OA is irrelevant as it does not teach decoding an indicium and subsequent detection of the decoded indicium. However, under the broadest reasonable interpretation of the claim, the disclosure of Krishnamurthy does exactly this. Examiner submits that Applicant does not set forth further limitations on what the indicium may comprise (the indicium is not limited to an optical code; an item feature, color, shape, or brand logo are all encompassed as reasonable interpretations of the term “indicium”, as stated in the prior OA). As a result, and under the broadest reasonable interpretation:
the detection of an image feature across multiple images may be the recognition of an image feature by a CNN, as stated in paras. 0075-0076 of Krishnamurthy (wherein the image feature may be any one of, but not limited to, an optical code, item feature, color, shape, or barcode); and
the identification of a barcode as an indicium, specifically, and the subsequent scanning process of determining payload information associated with the item by scanning a barcode, are disclosed within paras. 0062 and 0079 of Krishnamurthy.
Thus, under the broadest reasonable interpretation of the terms “indicium” and “decoding”, the ordinarily skilled artisan would appreciate that the method of Krishnamurthy performs “identifying an object of interest associated with the indicium in the second image based upon a location of the at least the portion of the indicium in the second image”.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 7-14, 17-23, and 25 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Krishnamurthy et al. (US PG Pub 20220414375 A1, hereafter referred to as Krishnamurthy).
Regarding claim 1, Krishnamurthy discloses a method for object tracking, the method comprising: receiving, from a first optical imaging assembly having a first field of view (FOV) (para. 0025, wherein the optical imaging assembly is an individual camera which is a part of a larger system which contains a plurality of cameras, and wherein each camera monitors a different field of view in images acquired, as disclosed within “relative camera positions…e.g., orientation, viewing angle, distance”), a first image captured over the first FOV (para. 0008, “a first image of a first barcode fragment applied to a first object captured by a camera at a first time and identifying, by the processor, a first position of the first barcode fragment based on the first image”); receiving, from a second optical imaging assembly having a second FOV, a second image captured over the second FOV (paras. 0063-0065, wherein multiple cameras are placed within the enclosure containing the tray with the items, with different angles and perspectives to capture different aspects of the enclosure); decoding an indicium in the first image (paras. 0056-0058, wherein the indicium is information about the item which may comprise text, logos, branding, colors, and barcodes among other key features); detecting at least a portion of the indicium in the second image (paras. 0062-0064, 0079, and fig. 2A, wherein the first and second images may be taken by multiple different imagers with different fields of view); and identifying an object of interest associated with the indicium in the second image based upon a location of the at least the portion of the indicium in the second image (paras. 0079, 0123-0136, and 0142-0143, wherein a region of interest from the second image is calculated, and subsequently used within a clustering step to create an ROI image).
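For illustration only (not part of the record), the sequence of steps recited in claim 1 can be sketched as a small pipeline; every function and variable name below is hypothetical and is not taken from Krishnamurthy or the instant application:

```python
# Illustrative-only skeleton of the claim 1 steps: decode an indicium in
# the first image, detect at least a portion of it in the second image,
# then identify the object of interest from that portion's location.

def track_object(first_image, second_image, decode, detect_fragment, locate):
    payload = decode(first_image)                     # decode indicium, first FOV
    location = detect_fragment(second_image, payload)  # find portion, second FOV
    if location is None:
        return None                                    # indicium not in second FOV
    return locate(second_image, location)              # identify object of interest

# Toy stand-ins: an "image" is modeled as a dict of payload -> pixel location.
decode = lambda img: next(iter(img))
detect_fragment = lambda img, payload: img.get(payload)
locate = lambda img, loc: {"object_of_interest_at": loc}

first = {"0123456789012": (10, 20)}
second = {"0123456789012": (150, 220)}
result = track_object(first, second, decode, detect_fragment, locate)
```

The dict-based stand-ins only make the control flow concrete; a real implementation would use barcode decoding and image-feature detection in place of the lambdas.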
Regarding claim 11, Krishnamurthy discloses a system for object tracking (para. 0051), the system comprising: a memory (para. 0090), a processing device, coupled to the memory (para. 0090); a first optical imaging assembly having a first field of view (FOV), configured to capture a first image over the first FOV (para. 0008, “a first image of a first barcode fragment applied to a first object captured by a camera at a first time and identifying, by the processor, a first position of the first barcode fragment based on the first image”); and a second optical imaging assembly having a second FOV, configured to capture a second image over the second FOV (paras. 0063-0065, wherein multiple cameras are placed within the enclosure containing the tray with the items, with different angles and perspectives to capture different aspects of the enclosure), wherein the processing device is configured to decode an indicium in the first image (paras. 0056-0058, wherein the indicium is information about the item which may comprise text, logos, branding, colors, and barcodes among other key features), detect at least a portion of the indicium in the second image (paras. 0062-0064, 0079, and fig. 2A, wherein the first and second images may be taken by multiple different imagers with different fields of view), and identify an object of interest associated with the indicium in the second image based upon a location of the at least the portion of the indicium in the second image (paras. 0079, 0123-0136, and 0142-0143, wherein a region of interest from the second image is calculated, and subsequently used within a clustering step to create an ROI image).
Regarding claim 21, Krishnamurthy discloses a machine vision device, comprising: a memory (para. 0090); a processing device, coupled to the memory (para. 0090); a first optical imaging assembly having a first field of view (FOV), configured to capture a first image over the first FOV (para. 0008, “a first image of a first barcode fragment applied to a first object captured by a camera at a first time and identifying, by the processor, a first position of the first barcode fragment based on the first image”); and a second optical imaging assembly having a second FOV, configured to capture a second image over the second FOV (paras. 0063-0065, wherein multiple cameras are placed within the enclosure containing the tray with the items, with different angles and perspectives to capture different aspects of the enclosure), wherein the processing device is configured to decode an indicium in the first image (paras. 0056-0058, wherein the indicium is information about the item which may comprise text, logos, branding, colors, and barcodes among other key features), detect at least a portion of the indicium in the second image (paras. 0062-0064, 0079, and fig. 2A, wherein the first and second images may be taken by multiple different imagers with different fields of view), and identify an object of interest associated with the indicium in the second image based upon a location of the at least the portion of the indicium in the second image (paras. 0079, 0123-0136, and 0142-0143, wherein a region of interest from the second image is calculated, and subsequently used within a clustering step to create an ROI image).
Regarding claims 2, 12, and 22, Krishnamurthy discloses all limitations of claims 1, 11, and 21, respectively. Krishnamurthy further discloses wherein the detecting, as employed by the processing device within the machine vision device, includes a visual identification of the indicium in the second image as having similar visual features to the indicium in the first image (paras. 0078-0081, 0112, 0134-0142, 0157-0158, and 0188, wherein the indicia of the items are tracked and identified using the method of para. 0112, the identification process of para. 0188 is used to obtain a vector for each item, and the images and process of para. 0188 are used to identify similarity according to paras. 0134-0142 and 0157-0158).
Regarding claims 3 and 13, Krishnamurthy discloses all limitations of claims 1 and 11, respectively. Krishnamurthy further discloses wherein the method of claim 3 and the processing device of claim 13 are further directed to decoding at least a portion of the indicium in the second image, wherein the detecting includes: comparing payload data obtained from the decoding of the indicium in the first image with payload data obtained from the decoding of the at least the portion of the indicium in the second image to determine if at least a partial payload match exists (paras. 0130-0143, wherein matching of indicia according to different characteristics constitutes a partial payload match); and determining that the indicium in the second image has been located responsive to a determination that the at least the partial payload match exists (paras. 0142-0143, wherein the new ROI image created through pixel correspondence is generated upon determination of at least a partial match).
Regarding claims 4 and 14, Krishnamurthy discloses all limitations of claims 1 and 11, respectively. Krishnamurthy further discloses wherein the method of claim 1 and the corresponding device of claim 11 further comprise decoding at least a portion of the indicium in the second image (paras. 0135 and 0142-0143); comparing payload data obtained from the decoding of the indicium in the first image with payload data obtained from the decoding of the at least the portion of the indicium in the second image to determine if a complete payload match exists (paras. 0130-0143, wherein matching of indicia according to different characteristics constitutes at least a partial payload match); and determining that the indicium in the second image has been located responsive to a determination that the complete payload match exists (para. 0131, match identification).
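For illustration only (not part of the record), the partial versus complete payload comparison recited in claims 3-4 and 13-14 can be sketched as follows; the matching criterion (substring containment) is an illustrative assumption, not a method disclosed by Krishnamurthy:

```python
# Illustrative-only sketch: comparing the payload decoded from the first
# image with the (possibly partial) payload decoded from the second image.

def payload_match(full_payload, fragment):
    """Return 'complete', 'partial', or 'none' for the degree of match
    between a fully decoded payload and a decoded fragment."""
    if not fragment:
        return "none"
    if fragment == full_payload:
        return "complete"
    if fragment in full_payload:  # hypothetical partial-match criterion
        return "partial"
    return "none"

complete = payload_match("0123456789012", "0123456789012")  # full decode in both
partial = payload_match("0123456789012", "456789")          # fragment only
```

A real decoder would also account for symbology check digits and decode confidence; the substring test above only makes the partial/complete distinction concrete.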
Regarding claims 7 and 17, Krishnamurthy discloses all limitations of claims 1 and 11, respectively. Krishnamurthy also discloses wherein the method of claim 1 and the corresponding device of claim 11 further comprise querying a database with payload data from the indicium in the first image (para. 0112, wherein the indicium is any of (but not limited to) a barcode, pattern, or textual identifier, and wherein the query is directed towards a database which contains item information); obtaining one or more first characteristics of an item associated with the payload data, wherein the first characteristics include at least one of a shape, a color, a curvature, a texture, a visual pattern, or a size (para. 0112, wherein the visual pattern is a barcode, and branding, colors, and text are also characteristics identifying payload data); discerning one or more second characteristics of the object of interest corresponding with the first characteristics (para. 0112, wherein secondary characteristics, as defined in para. 0054 of the Specification of the instant application, may be of the same type as the first characteristics, and wherein the visual pattern is a barcode, and branding, colors, and text are also characteristics identifying payload data); and comparing the first characteristics with the second characteristics to determine whether the object of interest is the item associated with the payload data (paras. 0112-0113, wherein the item tracking device matches items based on image processing, region of interest, and identified characteristics).
Regarding claims 8 and 18, Krishnamurthy discloses all limitations of claims 7 and 17, respectively. Krishnamurthy further discloses wherein the processing device is further configured to transmit an alert responsive to a determination that the object of interest is not the item associated with the payload data (para. 0084, wherein the alert is generated when an item on the tray is a “prohibited” item, unavailable or non-purchasable).
Regarding claims 9 and 19, Krishnamurthy discloses all limitations of claims 7 and 17, respectively. Krishnamurthy further discloses employing image data from the first optical imaging assembly and image data from the second optical imaging assembly to train an artificial intelligence model responsive to a determination that the object of interest is the item associated with the payload data (paras. 0076-0078 for training a machine learning algorithm, specifically a neural network, and paras. 0142-0143 for item identification within the platform that the item is placed on).
Regarding claims 10, 20, and 25, Krishnamurthy discloses all limitations of claims 1, 11, and 21, respectively. Krishnamurthy further discloses wherein the first optical imaging assembly and the second optical imaging assembly are contained in a housing with a base portion and a raised portion, wherein the base portion includes a substantially horizontal platter area with a substantially horizontal window, and the raised portion includes a generally upright window (paras. 0061-0065, wherein the description of the enclosure is directed towards the two optical imaging assemblies, the housing, the base, the raised portion, the platform, and windows).
Regarding claim 23, Krishnamurthy discloses all limitations of claim 21. Krishnamurthy further discloses wherein the processing device is further configured to decode at least a portion of the indicium in the second image (para. 0075, wherein the second image is captured and features, including the barcode, are captured and analyzed).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5-6, 15-16, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Krishnamurthy in view of Chakravarty et al. (US Patent No. 11,361,536 B2, hereafter referred to as Chakravarty).
Regarding claims 5, 15, and 24, Krishnamurthy discloses all limitations of claims 1, 11, and 21, respectively. Krishnamurthy further discloses wherein data from the first optical imaging assembly and data from the second optical imaging assembly are sent to a module that is configured with an imaging algorithm (paras. 0122-0143, wherein the input is multiple images from different angles, and wherein the module with the imaging algorithm consists of computing a homography and subsequent object detection for items in the enclosure).
Krishnamurthy does not disclose wherein there are two discrete image algorithms, and the data from the first optical imaging assembly and the data from the second optical imaging assembly are input into separate algorithms.
However, Chakravarty discloses wherein there are two discrete image algorithms, and the data from the first optical imaging assembly and the data from the second optical imaging assembly are input into separate algorithms (paras. 7, 12, and 14, wherein the two parallel neural networks operate on different image data). Specifically, Chakravarty discloses a system and device for purchasing/checking out at a retail establishment, wherein the detection comprises identification of item removal from a shelf for a faster and more efficient purchase. Therefore, both Krishnamurthy and Chakravarty disclose checkout-adjacent methods utilizing multi-image perspectives and item recognition and identification. Thus, it would have been obvious to one having ordinary skill in the art prior to the effective filing date of the claimed invention to have utilized the parallel neural network architecture of Chakravarty within the item identification method of Krishnamurthy as a simple substitution of one artificial intelligence/deep learning image detection and recognition method for another, with the predictable result of better item identification from different camera classes, which might not be as susceptible to errors as a homography-based approach.
Regarding claims 6 and 16, Krishnamurthy and Chakravarty disclose all limitations of claims 5 and 15, respectively. Krishnamurthy further discloses wherein the imaging algorithm decodes the indicium in the first image and wherein the algorithm detects an object in the second image (paras. 0062, 0075, and 0142-0143, wherein para. 0062 recites scanning the barcode (the indicium), para. 0075 recites capturing an image, and paras. 0142-0143 recite identifying objects in the second image).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROHAN TEJAS MUKUNDHAN whose telephone number is (571)272-2368. The examiner can normally be reached Monday - Friday 9AM - 6PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse, can be reached at 571-272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROHAN TEJAS MUKUNDHAN/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698