DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2025 has been entered.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/09/2026 has been considered by the examiner and placed in Applicant’s file.
Response to Arguments
Applicant’s arguments, filed 11/10/2025, with respect to claims 1-3, 5-11 and 13-16 have been fully considered but are moot because the arguments do not apply to the references and combinations of references being used in the current rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 5-7, 9, 11, 13-16 are rejected under 35 U.S.C. 103 as being unpatentable over CHEN et al. (US 20190371134 A1), hereinafter referenced as CHEN, in view of SATO (US 20160180509 A1), hereinafter referenced as SATO.
Regarding claim 1, CHEN explicitly teaches an information processing system (Fig. 1, #100 called a self-checkout system. Paragraph [0027]-CHEN discloses a self-checkout system 100 includes a customer abnormal behavior detection device 110, a product identification device 120 and a platform 130. A clearly visible checkout area 132 is included on the platform 130 for the customer to place the products. The self-checkout system includes a product identification device and a customer abnormal behavior detection device (wherein the product identification and customer abnormal behavior detection devices may be interconnected). In paragraph [0031]-CHEN discloses the product identification device 120 may include a processor 122, a storage device 124, an image capturing device 126 and/or a display device 128. Please also read paragraph [0028 and 0040-0044]) comprising:
an imager (Fig. 1-2, #116, #126, #212, #214, and #222 called an image capturing device. Paragraph [0031 and 0040-0044]. Further in paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222);
a controller (Fig. 1-2, #112, #122 and #216 called a processor. Paragraph [0031 and 0040-0044]) configured to perform processing based on an image captured by the imager (Fig. 1. Paragraph [0031]-CHEN discloses the product identification device 120 may include a processor 122, a storage device 124, an image capturing device 126 and/or a display device 128. The processor 122 may be a general-purpose computer central processing unit (CPU) that provides various functions by reading and executing programs or commands stored in the storage device. The storage device 124 is configured to store programs for the operation of the product identification device 120, including, for example, a part or all of a product object segmentation program, a product feature identification program, a product placement determination program, a product facing direction determination program and a product connection detection program. Please also read paragraph [0040-0044]); and
Although CHEN explicitly teaches a storage (Fig. 1-2, #114 and #124 called a storage device. Paragraph [0032]-CHEN discloses the storage device 124 may also store a plurality of databases, and these databases are used to store a plurality of checkout behavior data and deep learning data. Please also see Fig. 2 and read paragraph [0040-0044]) configured to store information regarding a plurality of articles (Fig. 1A. Paragraph [0032]-CHEN discloses the storage device 124 may also include one database for storing a plurality of product data and deep learning data), wherein the controller is configured to perform:
recognition processing of an object included in the image (Fig. 5. Paragraph [0054]-CHEN discloses in step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530, which includes a classification result confidence value (step S710), a step of a product facing direction identification (step S720) and a step of a product connection detection (step S730). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed). Please also read paragraph [0036-0037 and 0039]);
estimation process of a cause of failure of the recognition process when the recognition process fails (Fig. 5. Paragraph [0060]-CHEN discloses the product classification program calculates the classification result confidence value of the product classification based on the product image features. If the classification result confidence value is less than the threshold, step S720 is then performed. In paragraph [0061]-CHEN discloses in step S720, the product facing direction identification is performed. In paragraph [0062]-CHEN discloses if it is determined that a number of the features of the product facing up image is insufficient or too small, it is then determined that the product image has the surface with fewer features, so the product cannot be identified properly or is hard to identify. Additionally in paragraph [0063]-CHEN discloses after the product facing direction determination program is executed, if it is determined that the number of the features on the surface of the product facing up is sufficient, it may be determined that the product is lying flat on the platform. Next, the processor loads the product connection detection program so as to perform the step S730 of the product connection detection. The product connection detection program is used to determine whether multiple products are connected to each other or overlapping with each other through an aspect ratio detection. Please also read paragraphs [0036-0037, 0039 and 0056]); and
post-processing including execution of at least one of changing imaging conditions of the imager and notifying users according to a result obtained through the estimation process (Fig. 5. Paragraph [0042]-CHEN discloses the locations of the image capturing devices 212 and 214, the image capturing device 222 or the projection apparatus 224 may all be adjusted and may be shared and used by the others based on the demands. In paragraph [0056]-CHEN discloses if it is determined that the product image features are not corresponding to the image features of the product in the feature database, or it is unable to determine whether the product image features of the product to be identified are the image features of the product in the feature database, step S550 is performed, so that the customer is notified to adjust a position of the product on the platform. Then, the process returns to step S510, in which a platform image with the adjusted product on the platform is captured. In step S540, if there are multiple products being identified and at least one of the identified products cannot be determined to be one of the products in the feature database, step S550 is then performed (wherein a prompt message may also be sent when products are connected). Please also read paragraphs [0036-0037, 0039, 0063 and 0065-0066]), and the controller is further configured to, in the recognition process:
detect the object in the image and identify the object as one of a plurality of certain articles (Fig. 5. Paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. In step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530, which includes a classification result confidence value (step S710), a step of a product facing direction identification (step S720) and a step of a product connection detection (step S730)). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed. In paragraph [0060]-CHEN discloses in step S710, the classification result confidence value is generated. When the classification result confidence value indicates that the confidence level is high or the product may be determined based on the classification result confidence value, it is not required to perform step S720 subsequently. If the classification result confidence value is less than the threshold, step S720 is then performed. Please also read paragraph [0036-0037 and 0060-0062]), and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified (Fig. 7. Paragraph [0061]-CHEN discloses in step S720, the product facing direction identification is performed.
In paragraph [0062]-CHEN discloses the product placement determination program can determine the facing direction of the product placed on the platform), and when the controller estimates, in the estimation process on the basis of the imaging direction (Fig. 7. Paragraph [0061]-CHEN discloses in step S720, the product facing direction identification is performed. After executing the product feature identification program, the processor loads the product placement determination program stored in storage device into the memory device for execution. The product placement determination program is used to determine whether the object placed on the platform is the product, whether a surface of the product placed on the platform facing up is a surface with fewer features, or whether the product is placed in such a way that clear features can be captured by the image capture unit of the platform), that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed (Fig. 5. Paragraph [0062]-CHEN discloses when it is determined that the product image has the surface with fewer features (i.e., the number of the features is insufficient for identification), it is not required to perform step S730 but to have the customer notified to adjust the facing direction of the product being placed. Further in paragraph [0056]-CHEN discloses if it is determined that the product image features are not corresponding to the image features of the product in the feature database, or it is unable to determine whether the product image features of the product to be identified are the image features of the product in the feature database, step S550 is performed, so that the customer is notified to adjust a position of the product on the platform. Then, the process returns to step S510, in which a platform image with the adjusted product on the platform is captured. 
In step S540, if there are multiple products being identified and at least one of the identified products cannot be determined to be one of the products in the feature database, step S550 is then performed. Please also read [0036-0037 and 0039]).
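For illustration only, and not as part of CHEN's disclosure, the failure-cause cascade of steps S710-S730 summarized above can be sketched as follows; the function name, feature-count threshold, and prompt wording are assumptions made for the sketch:

```python
# Hypothetical sketch of CHEN's S710 -> S720 -> S730 cascade (names assumed).
def estimate_failure_cause(confidence, upface_feature_count,
                           min_features=10, conf_threshold=0.7):
    """Walk the checks in order and return a user-facing prompt, or None."""
    # Step S710: high confidence means recognition succeeded; no prompt needed.
    if confidence >= conf_threshold:
        return None
    # Step S720: too few features on the upward-facing surface.
    if upface_feature_count < min_features:
        return "Please turn over the product"
    # Step S730: enough features but still unrecognized -> likely connected products.
    return "Please separate the products"

prompt = estimate_failure_cause(confidence=0.4, upface_feature_count=3)
```

The ordering mirrors the cited flow: the facing-direction check is only reached when the classification confidence falls below the threshold, and the connection check only when the upward face has sufficient features.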
CHEN fails to explicitly teach a storage configured to store information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured, and the controller is further configured to, in the recognition process: detect the object in the image and identify the object as one of a plurality of certain articles, and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified, and when the controller estimates, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed.
However, SATO explicitly teaches a storage (Fig. 8, #362 called a commodity database. Paragraph [0130]-SATO discloses the similar commodity database 362 includes a first commodity ID column 362a, a first direction column 362b, a second commodity ID column 362c, a second direction column 362d, a feature image column 362e, a feature direction column 362f, and a rotating axis vector column 362g) configured to store information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions (Fig. 8. Paragraph [0132]-SATO discloses the first direction column 362b is a column to store the direction to take an image of a reference image of one of the commodity between two similar candidate commodities, and the second direction column 362d is a column to store the direction to take an image of a reference image of the other commodity. In paragraph [0133]-SATO discloses the feature image column 362e is a column to store the link of the image of the reference images of one of the commodities between two similar candidate commodities, in which the similarity degree of the two candidate commodities is the smallest, and so a difference in feature is noticeable. In paragraph [0134]-SATO discloses the feature direction column 362f is a column to store the direction vector of the reference image of one of the commodities between two similar candidate commodities, in which the similarity degree of the two candidate commodities is the smallest, and so a difference in feature is noticeable. In paragraph [0135]-SATO discloses the rotating axis vector column 362g is a column to store the vector of a rotating axis when the target commodity is directed in the direction having the smallest similarity degree) in which respective surfaces of the articles are captured (Fig. 7. 
Paragraph [0121]-SATO discloses the reference images of the commodity in the six directions are recorded beforehand, whereby this commodity can be recognized easily when a commodity is held in any direction. In paragraph [0122]-SATO discloses FIGS. 7A to 7F illustrate reference images of a stuffed bear from the six directions. Please also read paragraph [0144-0147]),
and the controller is further configured to, in the recognition process:
detect the object in the image (Fig. 9. Paragraph [0139]-SATO discloses at Step S12, the object detection unit 91 performs object recognition processing to the frame images acquired by the image acquisition unit 90 to try to recognize (detect) the entire or a part of the object as a commodity) and identify the object as one of a plurality of certain articles (Fig. 9. Paragraph [0140]-SATO discloses at Step S13, the object detection unit 91 determines whether recognition of the entire or a part of the object as a commodity is successfully performed or not. When the object detection unit 91 determines that the object as a commodity is successfully recognized (Yes), the procedure proceeds to Step S14, and when it determines that the object as a commodity is not successfully recognized (No), the procedure returns to Step S11. In paragraph [0141]-SATO discloses at Step S14, the similarity degree calculation unit 92 reads a feature amount of the commodity from the entire or a part of the image of the commodity. In paragraph [0142]-SATO discloses at Step S15, based on whether there is a carried commodity having a similarity degree of the threshold T or more in the feature amount file 361 or not, the similarity degree determination unit 93 determines whether the commodity is specified uniquely or not. When the similarity degree determination unit 93 determines that the carried commodity having a similarity degree of the threshold T or more is uniquely specified, the procedure proceeds to Step S22, and when it is determined that there is a plurality of candidate commodities, the procedure proceeds to Step S16. When there are no candidate commodities, the procedure returns to Step S11. 
In paragraph [0143]-SATO discloses the processing from Step S16 to Step S21 is a series of processing relating to navigation to rotate the commodity), and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified, and when the controller estimates, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process (Fig. 9. Paragraph [0144]-SATO discloses at Step S16, the guidance unit 94 determines whether the similar commodity database 362 includes rotating direction information on the candidate commodities or not. That is, the guidance unit 94 searches for the combination of the candidate commodities having the highest and the second highest similarity degrees from the similar commodity database 362, and determines the presence of the rotating direction information on the candidate commodities based on the presence or not of the corresponding record. When the guidance unit 94 determines that the similar commodity database 362 includes rotating direction information on the candidate commodities (Yes), the procedure proceeds to Step S19, and when it is determined that the similar commodity database does not include rotating direction information (No), the procedure proceeds to Step S17. In paragraph [0145]-SATO discloses at Step S17, the guidance unit 94 compares reference images in the same direction for the two candidate commodities, and specifies the reference images between which a difference in feature becomes noticeable. Specifically, the guidance unit compares reference images in the six directions of the stuffed rabbit and the stuffed bear. 
Herein, these commodities have the smallest similarity degree in their front images, and so a difference in feature is noticeable), the post-processing includes a notification to the user that the imaging direction of the object be changed (Fig. 9. Paragraph [0146]-SATO discloses at Step S18, the guidance unit 94 specifies the imaging direction in which a difference in feature is noticeable, and records the same as well as the combination information on the two candidate commodities in the similar commodity database 362. Thereby, when similar candidate commodities are detected later, guidance can be displayed promptly without calculating their similarity degree. In paragraph [0147]-SATO discloses at Step S19, the guidance unit 94 calculates a rotating axis vector based on the cross product of the imaging direction vector in which a difference in feature is noticeable and the imaging direction vector of the reference images. Further the guidance unit 94 records the calculated rotating axis vector in the similar commodity database 362. In paragraph [0149]-SATO discloses at Step S21, the guidance unit 94 displays an arrow on the screen to navigate the rotation of the commodity that is held by the operator. When the processing at Step S21 ends, the procedure returns to Step S11. In paragraph [0150]-SATO discloses the processing from Step S22 to Step S23 is the processing when the commodity is specified uniquely).
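For illustration only, and not as SATO's actual implementation, the rotating-axis computation of step S19 (a cross product of the imaging-direction vector in which a difference in feature is noticeable and the imaging-direction vector of the reference image) can be sketched as follows; the example direction vectors are assumptions:

```python
# Hypothetical sketch of a rotating-axis vector as a 3D cross product.
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Assumed example: current reference-image direction is the front (+z) and the
# feature-noticeable direction is the right side (+x); rotating the commodity
# about the resulting +y axis turns its distinguishing face toward the camera.
axis = cross((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```

The resulting axis is perpendicular to both imaging directions, which is the geometric property the guidance arrow in step S21 would exploit.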
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHEN of having an information processing system, with the teachings of SATO of having a storage configured to store information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured, and the controller is further configured to, in the recognition process: detect the object in the image and identify the object as one of a plurality of certain articles, and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified, and when the controller estimates, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed.
In the resulting combination, CHEN’s system would have a storage configured to store information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured, and the controller would be further configured to, in the recognition process: detect the object in the image and identify the object as one of a plurality of certain articles, and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified, and when the controller estimates, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed.
The motivation behind the modification would have been to obtain a system that reduces the theft rate, improves the flow of checkout and improves the detection, identification and tracking of items, since both CHEN and SATO concern image analysis and automated retail checkout systems/methods. Wherein CHEN’s systems and methods provide an automated self-checkout with mobile payments that reduces the theft rate and the ability to identify and monitor items, while SATO’s systems and methods provide an automated self-checkout that improves the ability to narrow down and recognize commodities without interrupting the flow of sales. Please see CHEN et al. (US 20190371134 A1), Abstract and Paragraph [0050 and 0068] and SATO (US 20160180509 A1), Abstract and Paragraph [0014-0015 and 0216].
Regarding claim 3, CHEN in view of SATO explicitly teach the information processing system according to claim 1, CHEN further teaches wherein the notifying includes a notification about a change to at least one of a position of the object or an orientation of the object through a visual indication or a sound (Fig. 1B. Paragraph [0037]-CHEN discloses if the products within the viewing angle of the image capturing device 126 fail to show enough features of the products (e.g., the products are not placed correctly, or the products are stacked up on top of each other), the product identification device 120 can automatically detect such situation and display/project a prompt of “Please turn over or separate the products” through the monitor or the projector. The prompt may use any prompt content that can draw attentions (e.g., colors or texts) to remind the customer. Please also read paragraph [0039, 0056, 0063 and 0065]).
Regarding claim 5, CHEN in view of SATO explicitly teach the information processing system according to claim 1, CHEN further teaches wherein the controller is further configured to, in the recognition process (Fig. 5. Paragraph [0053]-CHEN discloses FIG. 5 is a schematic diagram illustrating a computer vision based product identification process. In paragraph [0054]-CHEN discloses in step S520, the product image feature identification process is performed. In paragraph [0059]-CHEN discloses the product classification may be performed in the product image feature analysis process in step S530. Please also read paragraph [0036-0037]), calculate a degree of reliability indicating how likely the object is an article identified (Fig. 5. Paragraph [0060]-CHEN discloses in step S710, the classification result confidence value is generated first. The product classification program calculates the classification result confidence value of the product classification based on the product image features. For example, based on the product image features, it can be calculated that the three highest classification result confidence values for the possibility of being Product 1 are 0.956, 0.022 and 0.017, and the three highest classification result confidence values for the possibility of being Product 2 are 0.672, 0.256 and 0.043) and determine whether the recognition processing has been successfully completed through comparing the degree of reliability with a first threshold (Fig. 5. Paragraph [0060]-CHEN discloses if the threshold is 0.7, because the highest classification result confidence value for the possibility of being Product 1 is 0.956, it can be determined that the product image feature is Product 1. In an embodiment, when the classification result confidence value indicates that the confidence level is high or the product may be determined based on the classification result confidence value, it is not required to perform step S720 subsequently.
If the classification result confidence value is less than the threshold, step S720 is then performed. Please also read paragraph [0061-0062]).
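For illustration only, and not as part of CHEN's disclosure, the confidence-threshold comparison of step S710, using CHEN's example values from paragraph [0060], can be sketched as follows; the function name and the competing candidate labels are assumptions:

```python
# Hypothetical sketch of CHEN's step S710 confidence check (names assumed).
def verify_classification(confidences, threshold=0.7):
    """Return the best label if its confidence clears the threshold,
    else None to signal that step S720 (facing-direction check) is needed."""
    best_label = max(confidences, key=confidences.get)
    if confidences[best_label] >= threshold:
        return best_label
    return None

# CHEN's example: Product 1 clears the 0.7 threshold (0.956), Product 2 does not (0.672).
result_1 = verify_classification({"Product 1": 0.956, "other A": 0.022, "other B": 0.017})
result_2 = verify_classification({"Product 2": 0.672, "other A": 0.256, "other B": 0.043})
```

This matches the cited logic: only when the highest confidence falls below the threshold does the process continue to the facing-direction identification of step S720.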
Regarding claim 6, CHEN in view of SATO explicitly teach the information processing system according to claim 1, CHEN further teaches wherein the storage (Fig. 1-2, #114 and #124 called a storage device. Paragraph [0032 and 0040-0044]) is further configured to store information regarding sizes of the plurality of articles (Fig. 1-2. Paragraph [0032]-CHEN discloses the storage device 124 may also store a plurality of databases, and these databases are used to store a plurality of checkout behavior data and deep learning data. The storage device 124 may also include one database for storing a plurality of product data and deep learning data), and wherein the controller is further configured to:
calculate, in the recognition process (Fig. 5. Paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. In step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image feature (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation. In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed. Whether the product to be identified in the product image is corresponding to the product in the database is determined by, for example, determining whether the product image features of the product to be identified are corresponding to image features of the product stored in the feature database. Please also read paragraph [0057-0058]), a size of the object based on the image, obtain, in the estimation process, information regarding the size of the plurality of articles from the storage, and estimate, in the estimation process, whether an image of the object included in the image is overlapping an edge of the image based on the size of the object and the size of the article (Fig. 7. Paragraph [0063]-CHEN discloses after the product facing direction determination program is executed, if it is determined that the number of the features on the surface of the product facing up is sufficient, it may be determined that the product is lying flat on the platform.
Next, the processor loads the product connection detection program stored in the storage device into the memory device for execution, so as to perform the step S730 of the product connection detection. The product connection detection program is used to determine whether multiple products are connected to each other or overlapping with each other through an aspect ratio detection. For example, if the aspect ratio of a normal (or the database's) canned drink is 2:1, when it is identified that the canned drink is lying and the aspect ratio of the canned drink is 1:1, it can be determined that the canned drink is connected to another product. The prompt message may be sent to notify the customer to adjust the positions of the products).
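For illustration only, and not as part of CHEN's disclosure, the aspect-ratio connection test of step S730 described above can be sketched as follows; the function name and the deviation tolerance are assumptions made for the sketch:

```python
# Hypothetical sketch of CHEN's aspect-ratio connection detection (step S730).
def products_connected(observed_w, observed_h, expected_ratio, tol=0.2):
    """Flag connected/overlapping products when the observed aspect ratio
    deviates from the database's expected ratio by more than the tolerance."""
    observed_ratio = observed_w / observed_h
    return abs(observed_ratio - expected_ratio) > tol * expected_ratio

# CHEN's example: a lying canned drink whose database aspect ratio is 2:1
# but which measures 1:1 in the image is flagged as connected to another product.
flag = products_connected(100, 100, expected_ratio=2.0)
```

A matching observed ratio (e.g., 200x100 pixels against a 2:1 expectation) would not be flagged, which is consistent with the cited logic that only a deviating ratio triggers the adjustment prompt.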
Regarding claim 7, CHEN in view of SATO explicitly teach the information processing system according to claim 6, CHEN further teaches wherein the controller is configured to, in the recognition process (Fig. 5. Paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. In step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image feature (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation. In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed. Whether the product to be identified in the product image is corresponding to the product in the database is determined by, for example, determining whether the product image features of the product to be identified are corresponding to image features of the product stored in the feature database), calculate a position of the object in the image based on the image (Fig. 5. Paragraph [0057]-CHEN discloses the image feature recognition process in step S520 is described. The image is first processed (e.g., by segmenting a captured product image), and then features of the product image are captured. 
Based on a platform image 610 being captured, the product object segmentation program segments the product regions from the platform image 610 by edge detection, increases a contrast between the background and the product based on a brightness feature in the platform image 610, locates a boundary of the product by using an edge detection method such as the Sobel edge detection method, uses a run length algorithm to reinforce the boundary and suppress noises, and then segments the product regions after the boundary is determined. After the boundary of the product regions is determined, as shown in the converted platform image 620, coordinates of the product regions can be calculated to obtain a region where the product images exist so that the features of the product images can be located based on the region of the product images. Then, based on these features, the product image feature analysis process of step S530 is performed), and estimate whether the image of the object included in the image is overlapping the edge of the image in consideration of the position (Fig. 7. Paragraph [0059]-CHEN discloses the product classification may be performed in the product image feature analysis process in step S530. This classification process includes a step of setting a classification result confidence value (step S710), a step of a product facing direction identification (step S720) and a step of a product connection detection (step S730). In paragraph [0063]-CHEN discloses after the product facing direction determination program is executed, if it is determined that the number of the features on the surface of the product facing up is sufficient, it may be determined that the product is lying flat on the platform.
Next, the processor loads the product connection detection program stored in the storage device into the memory device for execution, so as to perform the step S730 of the product connection detection), when a ratio of the size of the object to the size of the article is equal to or lower than a certain value, or when the object is smaller than the article by a certain value or larger (Fig. 7. Paragraph [0063]-CHEN discloses after the product facing direction determination program is executed, if it is determined that the number of the features on the surface of the product facing up is sufficient, it may be determined that the product is lying flat on the platform. Next, the processor loads the product connection detection program stored in the storage device into the memory device for execution, so as to perform the step S730 of the product connection detection. The product connection detection program is used to determine whether multiple products are connected to each other or overlapping with each other through an aspect ratio detection. For example, if the aspect ratio of a normal (or the database's) canned drink is 2:1, when it is identified that the canned drink is lying and the aspect ratio of the canned drink is 1:1, it can be determined that the canned drink is connected to another product. The prompt message may be sent to notify the customer to adjust the positions of the products).
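The segmentation steps CHEN describes in paragraph [0057] (edge detection, boundary location, region coordinates) can be sketched in simplified form. The 3x3 Sobel kernels below are the standard ones; the contrast enhancement and run length reinforcement steps are replaced by a simple threshold-and-bounding-box pass, so this is an illustration rather than CHEN's implementation:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with the standard 3x3 Sobel
    kernels; `img` is a 2D list of grayscale values."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = (gx * gx + gy * gy) ** 0.5
    return mag

def product_region(img, threshold):
    """Return the bounding box (x0, y0, x1, y1) of pixels whose Sobel
    magnitude exceeds `threshold` -- a simplified stand-in for the
    segmented product region whose coordinates CHEN's program
    calculates. Returns None when no edges are found."""
    mag = sobel_magnitude(img)
    edges = [(x, y) for y, row in enumerate(mag)
             for x, v in enumerate(row) if v > threshold]
    if not edges:
        return None
    xs = [x for x, _ in edges]
    ys = [y for _, y in edges]
    return (min(xs), min(ys), max(xs), max(ys))
```

The returned coordinates correspond to the region from which product image features would then be extracted for the step S530 analysis.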
Regarding claim 9, CHEN in view of SATO explicitly teaches the information processing system according to claim 5, but CHEN fails to explicitly teach wherein the controller is further configured to change, in the post-processing, the imaging condition to an imaging condition determined in advance for at least one of two or more articles when degrees of reliability of the object for the two or more articles are both equal to or higher than a second threshold in the estimation process.
However, SATO explicitly teaches wherein the controller is further configured to change, in the post-processing (Fig. 9. Paragraph [0140]-SATO discloses at Step S13, the object detection unit 91 determines whether recognition of the entire or a part of the object as a commodity is successfully performed or not. When the object detection unit 91 determines that the object as a commodity is successfully recognized (Yes), the procedure proceeds to Step S14, and when it determines that the object as a commodity is not successfully recognized (No), the procedure returns to Step S11. Specifically, Steps S11 to S13 are a series of processing, in which the operator holds the commodity over the camera 27 of the reading window 52, and the commodity identification device 2 successfully detects (recognizes) the object as this commodity. In paragraph [0142]-SATO discloses at Step S15, based on whether there is a carried commodity having a similarity degree of the threshold T or more in the feature amount file 361 or not, the similarity degree determination unit 93 determines whether the commodity is specified uniquely or not. When the similarity degree determination unit 93 determines that the carried commodity having a similarity degree of the threshold T or more is uniquely specified, the procedure proceeds to Step S22), the imaging condition to an imaging condition determined in advance for at least one of two or more articles (Fig. 9. Paragraph [0142]-SATO discloses when it is determined that there is a plurality of candidate commodities, the procedure proceeds to Step S16. Specifically, the similarity degree determination unit 93 determines a stuffed rabbit and a stuffed bear as the candidate commodities.
In paragraph [0144]-SATO discloses the guidance unit 94 searches for the combination of the candidate commodities having the highest and the second highest similarity degrees from the similar commodity database 362, and determines the presence of the rotating direction information on the candidate commodities based on the presence or not of the corresponding record (wherein the procedure proceeds to step S17 if rotating direction information does not exist, or to step S19 if it does exist). In paragraph [0145]-SATO discloses at Step S17, the guidance unit 94 compares reference images in the same direction for the two candidate commodities, and specifies the reference images between which a difference in feature becomes noticeable. Specifically, the guidance unit compares reference images in the six directions of the stuffed rabbit and the stuffed bear. These commodities have the smallest similarity degree in their front images, and so a difference in feature is noticeable there. In paragraph [0146]-SATO discloses at Step S18, the guidance unit 94 specifies the imaging direction in which a difference in feature is noticeable, and records the same as well as the combination information on the two candidate commodities in the similar commodity database 362) when degrees of reliability of the object for the two or more articles are both equal to or higher than a second threshold in the estimation process (Fig. 9. Paragraph [0147]-SATO discloses at Step S19, the guidance unit 94 calculates a rotating axis vector based on the cross product of the imaging direction vector in which a difference in feature is noticeable and the imaging direction vector of the reference images. In paragraph [0149]-SATO discloses at Step S21, the guidance unit 94 displays an arrow on the screen to navigate the rotation of the commodity that is held by the operator. When the processing at Step S21 ends, the procedure returns to Step S11.
In paragraph [0150]-SATO discloses the processing from Step S22 to Step S23 is the processing when the commodity is specified uniquely).
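SATO's Step S15 (threshold-based candidate selection) and Step S19 (rotating axis vector computed as a cross product of imaging direction vectors) can be sketched as follows; the data shapes, function names, and the order of the cross-product operands are assumptions for illustration, not taken from SATO:

```python
def candidates(similarities, threshold):
    """Commodities whose similarity degree meets the threshold T
    (per SATO, Step S15): a single hit means the commodity is
    uniquely specified, several hits mean rotation guidance is
    needed to distinguish the candidates."""
    return [cid for cid, s in similarities.items() if s >= threshold]

def rotation_axis(current_dir, feature_dir):
    """Rotating axis vector as the cross product of the current imaging
    direction and the direction in which a difference in feature is
    noticeable (per SATO, Step S19); the operand order is an assumption."""
    ax, ay, az = current_dir
    bx, by, bz = feature_dir
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)
```

For example, if the commodity is currently imaged along the z axis and the noticeable-feature view lies along the x axis, the resulting axis points along y, which is the axis about which the on-screen arrow would tell the operator to rotate the commodity.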
Regarding claim 11, CHEN in view of SATO explicitly teaches the information processing system according to claim 1, and CHEN further teaches wherein the storage (Fig. 1-2, #114 and #124 called a storage device. Paragraph [0031 and 0040-0044]) is further configured to store outer shape information regarding an outer shape of each of the plurality of articles (Fig. 7D. Paragraph [0032]-CHEN discloses the storage device 124 may also store a plurality of databases, and these databases are used to store a plurality of checkout behavior data and deep learning data. The storage device 124 may also include one database for storing a plurality of product data and deep learning data. Please also read paragraph [0054-0058 and 0063]), and wherein the controller (Fig. 1, #112, #122 and #216 called a processor. Paragraph [0031 and 0040-0044]) is further configured to notify, in the post-processing (Fig. 5. Paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. In step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530, which includes a classification result confidence value (step S710), a step of a product facing direction identification (step S720) and a step of a product connection detection (step S730)). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed.
Please also read paragraph [0036-0037 and 0063]), the user that arrangement of the object be changed (Fig. 5. Paragraph [0056]-CHEN discloses if it is determined that the product image features are not corresponding to the image features of the product in the feature database, or it is unable to determine whether the product image features of the product to be identified are the image features of the product in the feature database, step S550 is performed, so that the customer is notified to adjust a position of the product on the platform. Then, the process returns to step S510, in which a platform image with the adjusted product on the platform is captured. In an embodiment, in step S540, if there are multiple products being identified and at least one of the identified products cannot be determined to be one of the products in the feature database, step S550 is then performed. Please also read paragraph [0036-0037 and 0063]), when estimating, in the estimation process on a basis of an outer shape of the object recognized in the image captured by the imager and outer shape information regarding the article stored in the storage (Fig. 5. Paragraph [0057]-CHEN discloses the product object segmentation program segments the product regions from the platform image 610 by edge detection, locates a boundary of the product by using an edge detection method, uses a run length algorithm to reinforce the boundary, and then segments the product regions after the boundary is determined. After the boundary of the product regions is determined, coordinates of the product regions can be calculated to obtain a region where the product images exist so that the features of the product images can be located based on the region of the product images. Then, based on these features, the product image feature analysis process of step S530 is performed. Please also read paragraph [0055-0056 and 0063]), that the article identified is overlapping another article (Fig. 7D. 
Paragraph [0063]-CHEN discloses the processor loads the product connection detection program stored in the storage device into the memory device for execution, so as to perform the step S730 of the product connection detection. The product connection detection program is used to determine whether multiple products are connected to each other or overlapping with each other through an aspect ratio detection. For example, if the aspect ratio of a normal (or the database's) canned drink is 2:1, when it is identified that the canned drink is lying and the aspect ratio of the canned drink is 1:1, it can be determined that the canned drink is connected to another product. The prompt message may be sent to notify the customer to adjust the positions of the products. Please also read paragraph [0036-0037 and 0056]).
Regarding claim 13, CHEN in view of SATO explicitly teaches the information processing system according to claim 1, and CHEN further teaches wherein the controller is configured to calculate, in the recognition process on a basis of the image (Fig. 5. Paragraph [0053]-CHEN discloses FIG. 5 is a schematic diagram illustrating a computer vision based product identification process. The programs for the product identification device 220 of the present embodiment to operate include, for example, a part or all of the product object segmentation program, the product feature identification program, the product placement determination program, the product facing direction determination program and/or the product connection detection program), height of the object and notifies, in the post-processing (Fig. 5. Paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. In step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530, which includes a classification result confidence value (step S710), a step of a product facing direction identification (step S720) and a step of a product connection detection (step S730)). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed. Please also read paragraph [0036-0037 and 0056-0063]) in consideration of the height (Fig. 7D. 
Paragraph [0063]-CHEN discloses the processor loads the product connection detection program stored in the storage device into the memory device for execution, so as to perform the step S730 of the product connection detection. The product connection detection program is used to determine whether multiple products are connected to each other or overlapping with each other through an aspect ratio detection. For example, if the aspect ratio of a normal (or the database's) canned drink is 2:1, when it is identified that the canned drink is lying and the aspect ratio of the canned drink is 1:1, it can be determined that the canned drink is connected to another product. The prompt message may be sent to notify the customer to adjust the positions of the products (wherein the ratio of an object includes a height determination). Please also read paragraph [0054-0058]), the user of a method for changing a direction of the object (Fig. 7D. Paragraph [0056]-CHEN discloses if it is determined that the product image features are not corresponding to the image features of the product in the feature database, or it is unable to determine whether the product image features of the product to be identified are the image features of the product in the feature database, step S550 is performed, so that the customer is notified to adjust a position of the product on the platform. Then, the process returns to step S510, in which a platform image with the adjusted product on the platform is captured. In an embodiment, in step S540, if there are multiple products being identified and at least one of the identified products cannot be determined to be one of the products in the feature database, step S550 is then performed. Please also read paragraph [0036-0037 and 0060-0063]).
Regarding claim 14, CHEN in view of SATO explicitly teaches the information processing system according to claim 1, and CHEN further teaches wherein the controller is further configured to perform the estimation processing (Fig. 5. Paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. In step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530, which includes a classification result confidence value (step S710), a step of a product facing direction identification (step S720) and a step of a product connection detection (step S730). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed (wherein the process proceeds to S550 to adjust the position of the object if features are unable to be determined or if multiple products are identified as candidates)). Please also read paragraph [0036-0037 and 0039]) using two or more of a size of the object, a position of the object in the image (Fig. 5. Paragraph [0050]-CHEN discloses by using a YOLO model to perform one CNN on the image, a category and a position of the object may be determined. 
Further in paragraph [0057]-CHEN discloses the product object segmentation program segments the product regions from the platform image 610 by edge detection, locates a boundary of the product by using an edge detection method, uses a run length algorithm to reinforce the boundary, and then segments the product regions after the boundary is determined. After the boundary of the product regions is determined, coordinates of the product regions can be calculated to obtain a region where the product images exist so that the features of the product images can be located based on the region of the product images. Then, based on these features, the product image feature analysis process of step S530 is performed. Please also read paragraph [0063]), an outer shape of the object (Fig. 5. Paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation. Please also read paragraph [0057 and 0063]), an imaging direction of the object (Fig. 5. Paragraph [0061]-CHEN discloses in step S720, the product facing direction identification is performed. After executing the product feature identification program, the processor loads the product placement determination program stored in storage device into the memory device for execution. 
The product placement determination program is used to determine whether the object placed on the platform is the product, whether a surface of the product placed on the platform facing up is a surface with fewer features, or whether the product is placed in such a way that clear features can be captured by the image capture unit of the platform), and height of the object calculated through the recognition process (Fig. 7D. Paragraph [0063]-CHEN discloses the processor loads the product connection detection program stored in the storage device into the memory device for execution, so as to perform the step S730 of the product connection detection. The product connection detection program is used to determine whether multiple products are connected to each other or overlapping with each other through an aspect ratio detection. For example, if the aspect ratio of a normal (or the database's) canned drink is 2:1, when it is identified that the canned drink is lying and the aspect ratio of the canned drink is 1:1, it can be determined that the canned drink is connected to another product. The prompt message may be sent to notify the customer to adjust the positions of the products (wherein the product connection detection calculates the height, size and position of the object)) and a sum of weights of objects measured by a sensor (Fig. 7D. Paragraph [0036]-CHEN discloses a weight detection and/or a depth detection may be used to help identifying the products. 
Further in paragraph [0039]-CHEN discloses an abnormal checkout behavior determination technology used in the computer vision based self-checkout system includes an active determination for situations like the objects held by the customer not all being placed into the checkout area, the weight of the product not matching the identification result and/or operation errors caused by the customer; and messages that prompt the staff to actively provide assistance for those situations (wherein CHEN explicitly teaches all aspects of the limitation with the exception of a sum of weights of objects)).
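CHEN's weight-based verification (paragraphs [0036] and [0039]) checks whether the measured weight matches the identification result; extended to the claimed sum of weights, the comparison could look like the following sketch, where the catalog layout, function name, and gram tolerance are hypothetical:

```python
def weight_matches(identified_ids, measured_total, catalog, tolerance=5.0):
    """Compare the sum of catalog weights (grams) for the identified
    products against the total measured by the platform's weight
    sensor; a mismatch would trigger an assistance prompt, in the
    spirit of CHEN paragraphs [0036] and [0039]."""
    expected = sum(catalog[pid] for pid in identified_ids)
    return abs(expected - measured_total) <= tolerance
```

A mismatch can equally indicate a mis-identified product or an object that was never placed into the checkout area, so the prompt goes to the staff rather than auto-correcting the identification.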
Regarding claim 15, CHEN explicitly teaches an information processing apparatus (Fig. 1, #100 called a self-checkout system. Paragraph [0027]-CHEN discloses a self-checkout system 100 includes a customer abnormal behavior detection device 110, a product identification device 120 and a platform 130. A clearly visible checkout area 132 is included on the platform 130 for the customer to place the products. The self-checkout system includes a product identification device and a customer abnormal behavior detection device (wherein the product identification and customer abnormal behavior devices may be interconnected). In paragraph [0031]-CHEN discloses the product identification device 120 may include a processor 122, a storage device 124, an image capturing device 126 and/or a display device 128. Please also see Fig. 2 and read paragraph [0028 and 0040-0044]) comprising:
a communicator (Fig. 1-2. Paragraph [0032]-CHEN discloses the product identification device 120 may include a network access device. Please also read paragraph [0030]) configured to receive an image captured by an imager (Fig. 1-2, #116, #126, #212, #214, and #222 called an image capturing device. Paragraph [0031]. Further in paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. Please also read paragraph [0040-0044]);
a controller (Fig. 1-2, #112, #122 and #216 called a processor. Paragraph [0031]) configured to perform processing on a basis of the image captured by the imager (Fig. 1-2. Paragraph [0031]-CHEN discloses the processor 122 may be a general-purpose computer central processing unit (CPU) that provides various functions by reading and executing programs or commands stored in the storage device. The storage device 124 is configured to store programs for the operation of the product identification device 120, including, for example, a part or all of a product object segmentation program, a product feature identification program, a product placement determination program, a product facing direction determination program and a product connection detection program. Please also read paragraph [0040-0044]); and
Although CHEN explicitly teaches a storage (Fig. 1-2, #114 and #124 called a storage device. Paragraph [0032]-CHEN discloses the storage device 124 may also store a plurality of databases, and these databases are used to store a plurality of checkout behavior data and deep learning data) configured to store information regarding a plurality of articles (Fig. 1-2. Paragraph [0032]-CHEN discloses the storage device 124 may also include one database for storing a plurality of product data and deep learning data. Please also read paragraph [0040-0044]), wherein the controller is further configured to perform:
a recognition process of an object included in the image (Fig. 5. Paragraph [0053]-CHEN discloses FIG. 5 is a schematic diagram illustrating a computer vision based product identification process. In paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. In step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530, which includes a classification result confidence value (step S710), a step of a product facing direction identification (step S720) and a step of a product connection detection (step S730). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed). Please also read paragraph [0036-0037 and 0039]);
an estimation process of a cause of failure of the recognition process when the recognition process fails (Fig. 5. In paragraph [0060]-CHEN discloses the product classification program calculates the classification result confidence value of the product classification based on the product image features. If the classification result confidence value is less than the threshold, step S720 is then performed. In paragraph [0061]-CHEN discloses in step S720, the product facing direction identification is performed. In paragraph [0062]-CHEN discloses if it is determined that a number of the features of the product facing up image is insufficient or too small, it is then determined that the product image has the surface with fewer features so the product cannot be, or is hard to be, identified properly. Additionally in paragraph [0063]-CHEN discloses after the product facing direction determination program is executed, if it is determined that the number of the features on the surface of the product facing up is sufficient, it may be determined that the product is lying flat on the platform. Next, the processor loads the product connection detection program so as to perform the step S730 of the product connection detection. The product connection detection program is used to determine whether multiple products are connected to each other or overlapping with each other through an aspect ratio detection. Please also read [0036-0037, 0039 and 0054-0058]); and
post-processing in which the controller at least changes an imaging condition of the imager or notifies a user in accordance with a result obtained through the estimation process (Fig. 5. Paragraph [0056]-CHEN discloses if it is determined that the product image features are not corresponding to the image features of the product in the feature database, or it is unable to determine whether the product image features of the product to be identified are the image features of the product in the feature database, step S550 is performed, so that the customer is notified to adjust a position of the product on the platform. Then, the process returns to step S510, in which a platform image with the adjusted product on the platform is captured. In step S540, if there are multiple products being identified and at least one of the identified products cannot be determined to be one of the products in the feature database, step S550 is then performed (wherein a prompt message may also be sent when products are connected). Please also read paragraph [0036-0037, 0039, 0055-0058, 0060-0063 and 0065-0066]), and the controller is further configured to, in the recognition process:
detect the object in the image and identify the object as one of a plurality of certain articles (Fig. 5. Paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image feature (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed. Please also read paragraph [0036-0037 and 0060-0062]), and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified (Fig. 7. Paragraph [0060]-CHEN discloses in step S710, the classification result confidence value is generated. When the classification result confidence value indicates that the confidence level is high or the product may be determined based on the classification result confidence value, it is not required to perform step S720 subsequently. If the classification result confidence value is less than the threshold, step S720 is then performed. In paragraph [0061]-CHEN discloses in step S720, the product facing direction identification is performed. In paragraph [0062]-CHEN discloses the product placement determination program can determine the facing direction of the product placed on the platform), and when the controller estimates, in the estimation process on the basis of the imaging direction, that an imaging direction of the object is a cause of failure of the recognition process (Fig. 7. 
Paragraph [0061]-CHEN discloses the product placement determination program is used to determine whether the object placed on the platform is the product, whether a surface of the product placed on the platform facing up is a surface with fewer features, or whether the product is placed in such a way that clear features can be captured by the image capture unit of the platform), the post-processing includes a notification to the user that the imaging direction of the object be changed (Fig. 5. Paragraph [0062]-CHEN discloses when it is determined that the product image has the surface with fewer features (i.e., the number of the features is insufficient for identification), it is not required to perform step S730 but to have the customer notified to adjust the facing direction of the product being placed. Further in paragraph [0056]-CHEN discloses if it is determined that the product image features are not corresponding to the image features of the product in the feature database, or it is unable to determine whether the product image features of the product to be identified are the image features of the product in the feature database, step S550 is performed, so that the customer is notified to adjust a position of the product on the platform. Then, the process returns to step S510, in which a platform image with the adjusted product on the platform is captured. In step S540, if there are multiple products being identified and at least one of the identified products cannot be determined to be one of the products in the feature database, step S550 is then performed. Please also read [0036-0037 and 0039]).
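The estimation cascade mapped above (the confidence value of step S710, the facing-direction check of step S720, and the connection detection of step S730 per CHEN's Fig. 7) can be summarized in a short sketch; the thresholds and return labels are illustrative only, not values from CHEN:

```python
def estimate_failure_cause(confidence, feature_count, aspect_ok,
                           conf_threshold=0.8, min_features=10):
    """Cascade mirroring CHEN's Fig. 7 classification flow (S710-S730):
    a sufficiently high classification confidence ends the check, a low
    feature count indicates a few-feature surface is facing up (change
    the facing direction), and a failed aspect-ratio check indicates
    connected/overlapping products (adjust positions)."""
    if confidence >= conf_threshold:
        return "identified"              # S710: confidence high enough
    if feature_count < min_features:
        return "few-feature surface up"  # S720: notify user to change direction
    if not aspect_ok:
        return "products connected"      # S730: notify user to separate products
    return "unknown"
```

Each branch after "identified" corresponds to a different user notification in the post-processing, which is the mapping relied upon for the estimation-process limitation.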
CHEN fails to explicitly teach a storage configured to store information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured, and the controller is further configured to, in the recognition process: detect the object in the image and identify the object as one of a plurality of certain articles, and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified, and when the controller estimates, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed.
However, SATO explicitly teaches a storage (Fig. 8, #362 called a commodity database. Paragraph [0130]-SATO discloses the similar commodity database 362 includes a first commodity ID column 362a, a first direction column 362b, a second commodity ID column 362c, a second direction column 362d, a feature image column 362e, a feature direction column 362f, and a rotating axis vector column 362g) configured to store information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions (Fig. 8. Paragraph [0132]-SATO discloses the first direction column 362b is a column to store the direction to take an image of a reference image of one of the commodity between two similar candidate commodities, and the second direction column 362d is a column to store the direction to take an image of a reference image of the other commodity. In paragraph [0133]-SATO discloses the feature image column 362e is a column to store the link of the image of the reference images of one of the commodities between two similar candidate commodities, in which the similarity degree of the two candidate commodities is the smallest, and so a difference in feature is noticeable. In paragraph [0134]-SATO discloses the feature direction column 362f is a column to store the direction vector of the reference image of one of the commodities between two similar candidate commodities, in which the similarity degree of the two candidate commodities is the smallest, and so a difference in feature is noticeable. In paragraph [0135]-SATO discloses the rotating axis vector column 362g is a column to store the vector of a rotating axis when the target commodity is directed in the direction having the smallest similarity degree) in which respective surfaces of the articles are captured (Fig. 7. 
Paragraph [0121]-SATO discloses the reference images of the commodity in the six directions are recorded beforehand, whereby this commodity can be recognized easily when a commodity is held in any direction. In paragraph [0122]-SATO discloses FIGS. 7A to 7F illustrate reference images of a stuffed bear from the six directions. Please also read paragraph [0144-0147]),
and the controller is further configured to, in the recognition process:
detect the object in the image (Fig. 9. Paragraph [0139]-SATO discloses at Step S12, the object detection unit 91 performs object recognition processing to the frame images acquired by the image acquisition unit 90 to try to recognize (detect) the entire or a part of the object as a commodity) and identify the object as one of a plurality of certain articles (Fig. 9. Paragraph [0140]-SATO discloses at Step S13, the object detection unit 91 determines whether recognition of the entire or a part of the object as a commodity is successfully performed or not. When the object detection unit 91 determines that the object as a commodity is successfully recognized (Yes), the procedure proceeds to Step S14, and when it determines that the object as a commodity is not successfully recognized (No), the procedure returns to Step S11. In paragraph [0141]-SATO discloses at Step S14, the similarity degree calculation unit 92 reads a feature amount of the commodity from the entire or a part of the image of the commodity. In paragraph [0142]-SATO discloses at Step S15, based on whether there is a carried commodity having a similarity degree of the threshold T or more in the feature amount file 361 or not, the similarity degree determination unit 93 determines whether the commodity is specified uniquely or not. When the similarity degree determination unit 93 determines that the carried commodity having a similarity degree of the threshold T or more is uniquely specified, the procedure proceeds to Step S22, and when it is determined that there is a plurality of candidate commodities, the procedure proceeds to Step S16. When there are no candidate commodities, the procedure returns to Step S11. 
In paragraph [0143]-SATO discloses the processing from Step S16 to Step S21 is a series of processing relating to navigation to rotate the commodity), and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified, and when the controller estimates, in the estimation process on the basis of the imaging direction and the direction information regarding the article (Fig. 9. Paragraph [0144]-SATO discloses at Step S16, the guidance unit 94 determines whether the similar commodity database 362 includes rotating direction information on the candidate commodities or not. That is, the guidance unit 94 searches for the combination of the candidate commodities having the highest and the second highest similarity degrees from the similar commodity database 362, and determines the presence of the rotating direction information on the candidate commodities based on the presence or not of the corresponding record. When the guidance unit 94 determines that the similar commodity database 362 includes rotating direction information on the candidate commodities (Yes), the procedure proceeds to Step S19, and when it is determined that the similar commodity database does not include rotating direction information (No), the procedure proceeds to Step S17. In paragraph [0145]-SATO discloses at Step S17, the guidance unit 94 compares reference images in the same direction for the two candidate commodities, and specifies the reference images between which a difference in feature becomes noticeable. Specifically, the guidance unit compares reference images in the six directions of the stuffed rabbit and the stuffed bear. 
Herein, these commodities have the smallest similarity degree in their front images, and so a difference in feature is noticeable), that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed (Fig. 9. Paragraph [0146]-SATO discloses at Step S18, the guidance unit 94 specifies the imaging direction in which a difference in feature is noticeable, and records the same as well as the combination information on the two candidate commodities in the similar commodity database 362. Thereby, when similar candidate commodities are detected later, guidance can be displayed promptly without calculating their similarity degree. In paragraph [0147]-SATO discloses at Step S19, the guidance unit 94 calculates a rotating axis vector based on the cross product of the imaging direction vector in which a difference in feature is noticeable and the imaging direction vector of the reference images. Further the guidance unit 94 records the calculated rotating axis vector in the similar commodity database 362. In paragraph [0149]-SATO discloses at Step S21, the guidance unit 94 displays an arrow on the screen to navigate the rotation of the commodity that is held by the operator. When the processing at Step S21 ends, the procedure returns to Step S11. In paragraph [0150]-SATO discloses the processing from Step S22 to Step S23 is the processing when the commodity is specified uniquely).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHEN of having an information processing apparatus, with the teachings of SATO of having a storage configured to store information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured, and the controller is further configured to, in the recognition process: detect the object in the image and identify the object as one of a plurality of certain articles, and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified, and when the controller estimates, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed.
Wherein CHEN’s apparatus would include a storage configured to store information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured, and the controller would be further configured to, in the recognition process: detect the object in the image and identify the object as one of a plurality of certain articles, and recognize, on a basis of the image, an imaging direction at a time when the object is the article identified, and when the controller estimates, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing would include a notification to the user that the imaging direction of the object be changed.
The motivation behind the modification would have been to obtain an apparatus that reduces the theft rate, improves the flow of checkout and improves the detection, identification and tracking of items, since both CHEN and SATO concern image analysis and automated retail checkout systems/methods. Wherein CHEN’s systems and methods provide an automated self-checkout with mobile payments that reduces the theft rate and improves the ability to identify and monitor items, while SATO’s systems and methods provide an automated self-checkout that improves the ability to narrow down and recognize commodities without interrupting the flow of sales. Please see CHEN et al. (US 20190371134 A1), Abstract and Paragraph [0050 and 0068] and SATO et al. (US 20160180509 A1), Abstract and Paragraph [0014-0015 and 0216].
Regarding claim 16, CHEN explicitly teaches a method for processing information (Fig. 1. Paragraph [0027]-CHEN discloses a self-checkout system 100 includes a customer abnormal behavior detection device 110, a product identification device 120 and a platform 130. A clearly visible checkout area 132 is included on the platform 130 for the customer to place the products. The self-checkout system includes a product identification device and a customer abnormal behavior detection device (wherein the product identification and customer abnormal behavior devices may be interconnected). Further in paragraph [0053]-CHEN discloses FIG. 5 is a schematic diagram illustrating a computer vision based product identification process (wherein the programs include the product object segmentation program, the product feature identification program, the product placement determination program, the product facing direction determination program and/or the product connection detection program). Please also see Fig. 2 and read paragraph [0036-0037 and 0039]), the method comprising:
obtaining an image captured by an imager (Fig. 1-2, #116, #126 and #222 called an image capturing device. Paragraph [0031]. Further in paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222);
performing a recognition process of an object included in the image (Fig. 5. Paragraph [0054]-CHEN discloses in step S510, the product identification device starts operating and captures a platform image on the platform 230 through the image capturing device 222. In step S520, the product image feature identification process is performed. In paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530, which includes a classification result confidence value (step S710), a step of a product facing direction identification (step S720) and a step of a product connection detection (step S730). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed). Please also read paragraph [0036-0037 and 0039]);
Although CHEN explicitly teaches storing information regarding a plurality of articles and direction information (Fig. 1-2, #114 and #124 called a storage device. Paragraph [0032]-CHEN discloses the storage device 124 may also store a plurality of databases, and these databases are used to store a plurality of checkout behavior data and deep learning data. The storage device 124 may also include one database for storing a plurality of product data and deep learning data);
performing, when the recognition process fails, an estimation process of a cause of failure of the recognition process (Fig. 5. Paragraph [0060]-CHEN discloses the product classification program calculates the classification result confidence value of the product classification based on the product image features. If the classification result confidence value is less than the threshold, step S720 is then performed. In paragraph [0061]-CHEN discloses in step S720, the product facing direction identification is performed. In paragraph [0062]-CHEN discloses if it is determined that a number of the features of the product facing up image is insufficient or too small, it is then determined that the product image has the surface with fewer features so the product cannot/is hard to be identified properly. Additionally in paragraph [0063]-CHEN discloses after the product facing direction determination program is executed, if it is determined that the number of the features on the surface of the product facing up is sufficient, it may be determined that the product is lying flat on the platform. Next, the processor loads the product connection detection program so as to perform the step S730 of the product connection detection. The product connection detection program is used to determine whether multiple products are connected to each other or overlapping with each other through an aspect ratio detection. Please also read [0036-0037 and 0039]); and
performing post-processing in which at least an imaging condition of the imager is changed or a user is notified in accordance with a result obtained through the estimation process (Fig. 5. Paragraph [0056]-CHEN discloses if it is determined that the product image features are not corresponding to the image features of the product in the feature database, or it is unable to determine whether the product image features of the product to be identified are the image features of the product in the feature database, step S550 is performed, so that the customer is notified to adjust a position of the product on the platform. Then, the process returns to step S510, in which a platform image with the adjusted product on the platform is captured. In step S540, if there are multiple products being identified and at least one of the identified products cannot be determined to be one of the products in the feature database, step S550 is then performed (wherein a prompt message may also be sent when products are connected). Please also read paragraph [0036-0037, 0039, 0063 and 0065]), wherein the recognition process includes:
detecting the object in the image and identifying the object as one of a plurality of certain articles (Fig. 5. Paragraph [0055]-CHEN discloses after the product image features are identified, a product image feature analysis process is performed based on those features, as shown by step S530. In step S530, the obtained product image features (e.g., the shape, the color distribution, the text, the trademark, a barcode position or content) are compared with a feature database, so as to perform a product image identification operation (wherein the classification process (S710-S730) may be implemented as part of S530). In paragraph [0056]-CHEN discloses in step S540, a product identification result verification is performed. Please also read paragraph [0036-0037 and 0060-0062]), and recognizing, on a basis of the image, an imaging direction at a time when the object is the article identified (Fig. 7. Paragraph [0060]-CHEN discloses in step S710, the classification result confidence value is generated. When the classification result confidence value indicates that the confidence level is high or the product may be determined based on the classification result confidence value, it is not required to perform step S720 subsequently. If the classification result confidence value is less than the threshold, step S720 is then performed. In paragraph [0061]-CHEN discloses in step S720, the product facing direction identification is performed. In paragraph [0062]-CHEN discloses the product placement determination program can determine the facing direction of the product placed on the platform), and when estimating, in the estimation process on the basis of the imaging direction, that an imaging direction of the object is a cause of failure of the recognition process (Fig. 7. 
Paragraph [0061]-CHEN discloses the product placement determination program is used to determine whether the object placed on the platform is the product, whether a surface of the product placed on the platform facing up is a surface with fewer features, or whether the product is placed in such a way that clear features can be captured by the image capture unit of the platform), the post-processing includes a notification to the user that the imaging direction of the object be changed (Fig. 5. Paragraph [0062]-CHEN discloses when it is determined that the product image has the surface with fewer features (i.e., the number of the features is insufficient for identification), it is not required to perform step S730 but to have the customer notified to adjust the facing direction of the product being placed. Further in paragraph [0056]-CHEN discloses if it is determined that the product image features are not corresponding to the image features of the product in the feature database, or it is unable to determine whether the product image features of the product to be identified are the image features of the product in the feature database, step S550 is performed, so that the customer is notified to adjust a position of the product on the platform. Then, the process returns to step S510, in which a platform image with the adjusted product on the platform is captured. In step S540, if there are multiple products being identified and at least one of the identified products cannot be determined to be one of the products in the feature database, step S550 is then performed. Please also read [0036-0037 and 0039]).
CHEN fails to explicitly teach storing information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured; wherein the recognition process includes: detecting the object in the image and identifying the object as one of a plurality of certain articles, and recognizing, on a basis of the image, an imaging direction at a time when the object is the article identified, and when estimating, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed.
However, SATO explicitly teaches storing information (Fig. 8. Paragraph [0130]-SATO discloses the similar commodity database 362 includes a first commodity ID column 362a, a first direction column 362b, a second commodity ID column 362c, a second direction column 362d, a feature image column 362e, a feature direction column 362f, and a rotating axis vector column 362g) regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions (Fig. 8. Paragraph [0132]-SATO discloses the first direction column 362b is a column to store the direction to take an image of a reference image of one of the commodity between two similar candidate commodities, and the second direction column 362d is a column to store the direction to take an image of a reference image of the other commodity. In paragraph [0133]-SATO discloses the feature image column 362e is a column to store the link of the image of the reference images of one of the commodities between two similar candidate commodities, in which the similarity degree of the two candidate commodities is the smallest, and so a difference in feature is noticeable. In paragraph [0134]-SATO discloses the feature direction column 362f is a column to store the direction vector of the reference image of one of the commodities between two similar candidate commodities, in which the similarity degree of the two candidate commodities is the smallest, and so a difference in feature is noticeable. In paragraph [0135]-SATO discloses the rotating axis vector column 362g is a column to store the vector of a rotating axis when the target commodity is directed in the direction having the smallest similarity degree) in which respective surfaces of the articles are captured (Fig. 7. 
Paragraph [0121]-SATO discloses the reference images of the commodity in the six directions are recorded beforehand, whereby this commodity can be recognized easily when a commodity is held in any direction. In paragraph [0122]-SATO discloses FIGS. 7A to 7F illustrate reference images of a stuffed bear from the six directions. Please also read paragraph [0144-0147]);
wherein the recognition process includes:
detecting the object in the image (Fig. 9. Paragraph [0139]-SATO discloses at Step S12, the object detection unit 91 performs object recognition processing to the frame images acquired by the image acquisition unit 90 to try to recognize (detect) the entire or a part of the object as a commodity) and identifying the object as one of a plurality of certain articles (Fig. 9. Paragraph [0140]-SATO discloses at Step S13, the object detection unit 91 determines whether recognition of the entire or a part of the object as a commodity is successfully performed or not. When the object detection unit 91 determines that the object as a commodity is successfully recognized (Yes), the procedure proceeds to Step S14, and when it determines that the object as a commodity is not successfully recognized (No), the procedure returns to Step S11. In paragraph [0141]-SATO discloses at Step S14, the similarity degree calculation unit 92 reads a feature amount of the commodity from the entire or a part of the image of the commodity. In paragraph [0142]-SATO discloses at Step S15, based on whether there is a carried commodity having a similarity degree of the threshold T or more in the feature amount file 361 or not, the similarity degree determination unit 93 determines whether the commodity is specified uniquely or not. When the similarity degree determination unit 93 determines that the carried commodity having a similarity degree of the threshold T or more is uniquely specified, the procedure proceeds to Step S22, and when it is determined that there is a plurality of candidate commodities, the procedure proceeds to Step S16. When there are no candidate commodities, the procedure returns to Step S11. 
In paragraph [0143]-SATO discloses the processing from Step S16 to Step S21 is a series of processing relating to navigation to rotate the commodity), and recognizing, on a basis of the image, an imaging direction at a time when the object is the article identified, and when estimating, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process (Fig. 9. Paragraph [0144]-SATO discloses at Step S16, the guidance unit 94 determines whether the similar commodity database 362 includes rotating direction information on the candidate commodities or not. That is, the guidance unit 94 searches for the combination of the candidate commodities having the highest and the second highest similarity degrees from the similar commodity database 362, and determines the presence of the rotating direction information on the candidate commodities based on the presence or not of the corresponding record. When the guidance unit 94 determines that the similar commodity database 362 includes rotating direction information on the candidate commodities (Yes), the procedure proceeds to Step S19, and when it is determined that the similar commodity database does not include rotating direction information (No), the procedure proceeds to Step S17. In paragraph [0145]-SATO discloses at Step S17, the guidance unit 94 compares reference images in the same direction for the two candidate commodities, and specifies the reference images between which a difference in feature becomes noticeable. Specifically, the guidance unit compares reference images in the six directions of the stuffed rabbit and the stuffed bear. Herein, these commodities have the smallest similarity degree in their front images, and so a difference in feature is noticeable), the post-processing includes a notification to the user that the imaging direction of the object be changed (Fig. 
9. Paragraph [0146]-SATO discloses at Step S18, the guidance unit 94 specifies the imaging direction in which a difference in feature is noticeable, and records the same as well as the combination information on the two candidate commodities in the similar commodity database 362. Thereby, when similar candidate commodities are detected later, guidance can be displayed promptly without calculating their similarity degree. In paragraph [0147]-SATO discloses at Step S19, the guidance unit 94 calculates a rotating axis vector based on the cross product of the imaging direction vector in which a difference in feature is noticeable and the imaging direction vector of the reference images. Further the guidance unit 94 records the calculated rotating axis vector in the similar commodity database 362. In paragraph [0149]-SATO discloses at Step S21, the guidance unit 94 displays an arrow on the screen to navigate the rotation of the commodity that is held by the operator. When the processing at Step S21 ends, the procedure returns to Step S11. In paragraph [0150]-SATO discloses the processing from Step S22 to Step S23 is the processing when the commodity is specified uniquely).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHEN of a method for processing information, with the teachings of SATO of storing information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured; wherein the recognition process includes: detecting the object in the image and identifying the object as one of a plurality of certain articles, and recognizing, on a basis of the image, an imaging direction at a time when the object is the article identified, and when estimating, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing includes a notification to the user that the imaging direction of the object be changed.
Wherein CHEN’s method would include storing information regarding a plurality of articles and direction information indicating how easily the plurality of articles is identified from each of imaging directions in which respective surfaces of the articles are captured; wherein the recognition process includes: detecting the object in the image and identifying the object as one of a plurality of certain articles, and recognizing, on a basis of the image, an imaging direction at a time when the object is the article identified, and when estimating, in the estimation process on the basis of the imaging direction and the direction information regarding the article, that an imaging direction of the object is a cause of failure of the recognition process, the post-processing would include a notification to the user that the imaging direction of the object be changed.
The motivation behind the modification would have been to obtain a method that reduces the theft rate, improves the flow of checkout and improves the detection, identification and tracking of items, since both CHEN and SATO concern image analysis and automated retail checkout systems/methods. Wherein CHEN’s systems and methods provide an automated self-checkout with mobile payments that reduces the theft rate and improves the ability to identify and monitor items, while SATO’s systems and methods provide an automated self-checkout that improves the ability to narrow down and recognize commodities without interrupting the flow of sales. Please see CHEN et al. (US 20190371134 A1), Abstract and Paragraph [0050 and 0068] and SATO et al. (US 20160180509 A1), Abstract and Paragraph [0014-0015 and 0216].
Claims 2 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over CHEN et al. (US 20190371134 A1), hereinafter referenced as CHEN in view of SATO (US 20160180509 A1), hereinafter referenced as SATO and in further view of KAMBARA et al. (US 20190172039 A1), hereinafter referenced as KAMBARA.
Regarding claim 2, CHEN in view of SATO explicitly teach the information processing system according to claim 1. CHEN in view of SATO fail to explicitly teach wherein changing the imaging conditions includes changing an imaging range.
However, KAMBARA explicitly teaches wherein changing the imaging conditions includes changing an imaging range (Fig. 1. Paragraph [0159]-KAMBARA discloses the number of books recognizing unit 351 may recognize the number of books which are picked up from the shelf and the number of books returned to the shelf by the moving object Mo through the shelf camera 311 or may recognize the number of books which are picked up from the shelf and the number of books returned to the shelf by the shopper through a combination of the ceiling camera 310 and the shelf camera 311. At this time, the shelf camera 311 may be a camera that can perform imaging with a wide range. Please also read paragraph [0162]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHEN in view of SATO of having an information processing system, with the teachings of KAMBARA of wherein changing the imaging conditions includes changing an imaging range.
Wherein CHEN’s system would be modified such that changing the imaging conditions includes changing an imaging range.
The motivation behind the modification would have been to obtain a system that reduces the theft rate and improves the accuracy and efficiency of item detection, identification, and tracking, since both CHEN and KAMBARA concern image analysis and automated retail checkout systems and methods. CHEN's systems and methods provide an automated self-checkout with mobile payments that reduces the theft rate and provides the ability to identify and monitor items, while KAMBARA's systems and methods improve the ability to detect and identify items with a high degree of accuracy. Please see CHEN et al. (US 20190371134 A1), Abstract and Paragraphs [0050] and [0068], and KAMBARA et al. (US 20190172039 A1), Abstract and Paragraphs [0055] and [0184].
Regarding claim 8, CHEN in view of SATO explicitly teach the information processing system according to claim 6. CHEN in view of SATO fail to explicitly teach wherein the controller is further configured to enlarge, in the post-processing, an imaging range of the imager when the controller estimates, in the estimation process, that the image of the object included in the image is overlapping the edge of the image.
However, KAMBARA explicitly teaches the controller is further configured to enlarge, in the post-processing, an imaging range of the imager when the controller estimates, in the estimation process, that the image of the object included in the image is overlapping the edge of the image (Fig. 19. Paragraph [0162]-KAMBARA discloses the ceiling camera-based inter-moving object region transfer recognizing unit 355 may recognize the transfer by analyzing a motion of a person using the object recognition technique such as the deep learning, may recognize the hand in the moving object region at the time of transfer, or may recognize overlapping between the moving object regions (which may include the hand). The ceiling camera-based inter-moving object region transfer recognizing unit 355 may use the shelf camera 311 instead of the ceiling camera 310. At this time, the shelf camera 311 may be a camera that can perform imaging with a wide range. Please also read paragraphs [0169-0175, 0209-0211, 0236-0241, and 0286-0295]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHEN in view of SATO of having an information processing system with the teachings of KAMBARA of having the controller further configured to enlarge, in the post-processing, an imaging range of the imager when the controller estimates, in the estimation process, that the image of the object included in the image is overlapping the edge of the image.
In the combined system, CHEN's system would be modified such that the controller is further configured to enlarge, in the post-processing, an imaging range of the imager when the controller estimates, in the estimation process, that the image of the object included in the image is overlapping the edge of the image.
The motivation behind the modification would have been to obtain a system that reduces the theft rate and improves the accuracy and efficiency of item detection, identification, and tracking, since both CHEN and KAMBARA concern image analysis and automated retail checkout systems and methods. CHEN's systems and methods provide an automated self-checkout with mobile payments that reduces the theft rate and provides the ability to identify and monitor items, while KAMBARA's systems and methods improve the ability to detect and identify items with a high degree of accuracy. Please see CHEN et al. (US 20190371134 A1), Abstract and Paragraphs [0050] and [0068], and KAMBARA et al. (US 20190172039 A1), Abstract and Paragraphs [0055] and [0184].
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over CHEN et al. (US 20190371134 A1), hereinafter referenced as CHEN, in view of SATO (US 20160180509 A1), hereinafter referenced as SATO, and in further view of SHIRASAKI (US 20040249717 A1), hereinafter referenced as SHIRASAKI.
Regarding claim 10, CHEN in view of SATO explicitly teach the information processing system according to claim 1. CHEN further teaches notifying, in the post-processing, the user that arrangement of the object be changed when determining that the recognition process has failed (Fig. 5. Paragraph [0056]-CHEN discloses if the products within the viewing angle of the image capturing device 126 fail to show enough features of the products (e.g., the products are not placed correctly, or the products are stacked up on top of each other), the product identification device 120 can automatically detect such situation and display/project a prompt of “Please turn over or separate the products” through the monitor or the projector. The prompt may use any prompt content that can draw attentions (e.g., colors or texts) to remind the customer. Further in paragraph [0039]-CHEN discloses an abnormal checkout behavior determination technology used in the computer vision based self-checkout system includes an active determination for situations like the weight of the product not matching the identification result and/or operation errors caused by the customer; and messages that prompt the staff to actively provide assistant for those situations. Please also read paragraphs [0056 and 0062-0063]).
CHEN in view of SATO fail to explicitly teach a sensor that measures a sum of weights of all objects included in the image, wherein the storage is further configured to store weight information indicating a weight of each of the plurality of articles, and wherein the controller is further configured to: calculate, on a basis of the weight information stored in the storage, a calculated weight, the calculated weight being a sum of weights of all articles identified for all the objects detected;
determine whether the recognition process has been successfully completed in response to a result of comparing the calculated weight with the sum of the weights of all the objects measured by the sensor.
However, SHIRASAKI explicitly teaches a sensor (Fig. 2, #201 called a weight sensor. Paragraph [0093]) that measures a sum of weights of all objects included in the image (Fig. 4. Paragraph [0035]-SHIRASAKI discloses the shopping cart 102 comprises a weight sensor 201 on the bottom of the basket unit 200 to measure the weight of the articles put in the basket unit 200. The weight sensor 201 makes it possible to always monitor the total weight of articles put in the basket unit 200), wherein the storage (Fig. 5, #506 called article information storage section. Paragraph [0064]) is further configured to store weight information indicating a weight of each of the plurality of articles (Fig. 6. Paragraph [0064]-SHIRASAKI discloses FIG. 6 is an explanatory diagram of a data configuration of the article information storage section 506 of the checkout apparatus 101. Pieces of information of article names, unit prices, weights, and the like are stored in accordance with bar code numbers. The pieces of information related to the unit prices are stored in units of groups (Group A, Group B, Group C, . . . ), respectively), and wherein the controller (Fig. 4, #401 called a CPU. Paragraph [0039]) is further configured to:
calculate, on a basis of the weight information stored in the storage, a calculated weight, the calculated weight being a sum of weights of all articles identified for all the objects detected (Fig. 4. Paragraph [0093]-SHIRASAKI discloses a total weight of articles the bar codes of which are read by the bar-code reader 304 is calculated (step S1802). The calculation of the total weight can be performed such that information related to a weight is extracted from the database shown in FIG. 6 to perform an arithmetic operation on the basis of the information. In paragraph [0094]-SHIRASAKI discloses the total weight of articles calculated in step S1802 is compared with the weight information acquired in step S1803, and it is decided whether the difference falls within a predetermined range or not (step S1804). Please also read paragraph [0060-0061]);
determine whether the recognition process has been successfully completed in response to a result of comparing the calculated weight with the sum of the weights of all the objects measured by the sensors (Fig. 4. Paragraph [0094]-SHIRASAKI discloses when the difference falls within the predetermined range (step S1804: Yes), it is determined that the checkout process is correctly performed, and the control flow shifts to step S910 shown in FIG. 9 without performing any process. In paragraph [0095]-SHIRASAKI discloses when the difference is larger than a value falling within the predetermined range (step S1804: No), the checkout apparatus waits for a predetermined period of time. It is decided whether the predetermined period of time has elapsed (step S1805). The predetermined period of time can be determined in consideration of time required when the article is put in the basket unit 200 of the shopping cart 102 after the checkout process (read operation for the bar code of the article) is performed. When the predetermined period of time has not elapsed (step S1805: No), the control flow returns to step S1803, and the weight information is acquired from the weight sensor 201 again);
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of CHEN in view of SATO of having an information processing system with the teachings of SHIRASAKI of having a sensor that measures a sum of weights of all objects included in the image, wherein the storage is further configured to store weight information indicating a weight of each of the plurality of articles, and wherein the controller is further configured to: calculate, on a basis of the weight information stored in the storage, a calculated weight, the calculated weight being a sum of weights of all articles identified for all the objects detected; and determine whether the recognition process has been successfully completed in response to a result of comparing the calculated weight with the sum of the weights of all the objects measured by the sensor.
In the combined system, CHEN's system would be modified to have a sensor that measures a sum of weights of all objects included in the image, wherein the storage is further configured to store weight information indicating a weight of each of the plurality of articles, and wherein the controller is further configured to: calculate, on a basis of the weight information stored in the storage, a calculated weight, the calculated weight being a sum of weights of all articles identified for all the objects detected; and determine whether the recognition process has been successfully completed in response to a result of comparing the calculated weight with the sum of the weights of all the objects measured by the sensor.
The motivation behind the modification would have been to obtain a system that improves the automated checkout process as well as the accuracy and efficiency of item detection, identification, and tracking, since both CHEN and SHIRASAKI concern image analysis and automated checkouts. CHEN's systems and methods provide an automated self-checkout with mobile payments that reduces the theft rate and provides the ability to identify and monitor items, while SHIRASAKI's systems and methods improve the ability to efficiently perform a checkout process and obtain the cost of an article to be purchased. Please see CHEN et al. (US 20190371134 A1), Abstract and Paragraphs [0050] and [0068], and SHIRASAKI (US 20040249717 A1), Abstract and Paragraphs [0016-0018].
Conclusion
Listed below are the prior art references made of record and not relied upon which are considered pertinent to applicant's disclosure.
Takeno et al. (US 20170116491 A1)- In accordance with one embodiment, a commodity recognition apparatus detects, from a captured image, an object imaged in the captured image and extracts an appearance feature amount of the object from the image of the object; compares the extracted appearance feature amount with feature amount data of a dictionary file in which feature amount data indicating the surface information of a commodity is stored for each recognition target commodity to calculate a similarity degree indicating how similar the appearance feature amount is to the feature amount data for each recognition target commodity; recognizes whether or not the object is a commodity based on the calculated similarity degree; and specifies and notifies the reason in a case in which the object is not recognized as a commodity. Please see Fig. 1-4. Abstract.
Herwig et al. (US 20090039164 A1)- Systems and techniques for automated checkout verification. Product identification information is received and used as an index to retrieve a set of images associated with the identified product. The images may provide multiple views of the product. As the product is presented for purchase, an image of the product is captured and compared with the set of retrieved images. If it is determined that the captured image does not match the set of retrieved images, a security alert is issued. Please see Fig. 1-2. Abstract.
Iizaka et al. (US 20170344853 A1)- In accordance with an embodiment, an image processing apparatus comprises a designation module, a registration module and a display control module. The designation module designates an object. The registration module registers information of the object designated by the designation module. The display control module enables a display section to display an operation section for instructing the registration module based on the information registered by the registration module. Please see Fig. 1-13. Abstract.
FUKUDA et al. (US 20150193759 A1)- An object recognition device includes an operation unit configured to receive a user input about an item, a storage unit that stores image data of the item, an imaging unit configured to acquire an image of the item and generate image data therefrom, and a control unit configured to compare the generated image data with the stored image data, and cause information about updating the stored image data to be presented to a user, based on a comparison result. Please see Fig. 1-10. Abstract.
KATSUMURA et al. (US 20160086148 A1)- A merchandise item registration apparatus includes: a left photoelectric sensor that senses an object in a first area located on one side of a recognition area for merchandise items in a merchandise item identification device; a right photoelectric sensor that senses the object in a second area located on the other side of the recognition area; and a camera that captures an image of the recognition area. A merchandise item is identified by sensing the object from the image of the recognition area captured by the camera, and a POS terminal is made to perform a merchandise item registration process or a provisional registration cancellation process in accordance with the temporal sequence of results of the sensing performed by the camera in the respective areas. Please see Fig. 1-3. Abstract.
SAWADA et al. (US 20190026583 A1)- According to one embodiment, an article recognition apparatus includes an image interface, a weight interface and a processor. The image interface acquires an image captured by photographing a predetermined place where a plurality of articles are disposed. The processor acquires a first image, acquires a second image after detecting a predetermined event, recognizes an article, based on an image of an article area of an article which is absent in the second image, among article areas extracted from the first image, acquires a registered weight of the recognized article from an article database, and outputs an error if total of the registered weights disagrees with a difference weight between a first weight which a weight scale measures at a time of photographing the first image, and a second weight which the weight scale measures at a time of photographing the second image. Please see Fig. 1-2. Abstract.
Srivastava et al. (US 20190065823 A1)- Methods, systems, and programs are presented for simultaneous recognition of objects within a detection space utilizing three-dimensional (3D) cameras configured for capturing 3D images of the detection space. One system includes the 3D cameras, calibrated based on a pattern in a surface of the detection space, a memory, and a processor. The processor combines data of the 3D images to obtain pixel data and removes, from the pixel data, background pixels of the detection space to obtain object pixel data associated with objects in the detection space. Further, the processor creates a geometric model of the object pixel data, the geometric model including surface information of the objects in the detection space, generates one or more cuts in the geometric model to separate objects and obtain respective object geometric models, and performs object recognition to identify each object in the detection space based on the respective object geometric models. Please see Fig. 1-2 and 8-9. Abstract.
Chaubard et al. (US 11481751 B1)- A retail store automated checkout system uses images, video, or depth data to recognize products being purchased to expedite the checkout process and improve accuracy. All of a store's products, including those not sold in packages, such as fruits and vegetables, are imaged from a series of different angles and in different lighting conditions, to produce a library of images for each product. This library is used in a checkout system that takes images, video, and depth sensor readings as the products pass through the checkout area and remove bottlenecks later in the checkout process. Recognition of product identifiers or attributes such as barcode, QR code or other symbols, as well as OCR of product names, as well as the size and material of the product, can be additional or supplemental devices for identifying products being purchased. In another embodiment, an existing checkout or self-checkout scanner is enhanced with image recognition of products to achieve the same effect. Please see Fig. 1-6. Abstract.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Bonansinga, whose telephone number is (703) 756-5380. The examiner can normally be reached on Monday-Friday, 9:00 a.m. - 6:00 p.m. ET.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Chineyere Wills-Burns, can be reached by phone at (571) 272-9752. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AARON TIMOTHY BONANSINGA/Examiner, Art Unit 2673
/CHINEYERE WILLS-BURNS/Supervisory Patent Examiner, Art Unit 2673