DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgement is made of this application as a National Stage entry of PCT Application No. PCT/JP2021/037738, filed on October 12, 2021, with a claim of priority under 35 U.S.C. 119(a)-(d) or (f).
Information Disclosure Statement
The information disclosure statement (“IDS”) filed on 03/29/2024 has been reviewed and the listed references have been considered.
Drawings
The drawings are objected to under 37 CFR 1.83(a) because they fail to show the product name of the product identification information 2 in Figure 7, as described in the specification. Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. MPEP § 608.02(d). Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Status of Claims
Claims 1-8 are pending.
Claim Interpretation
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning of the terms used in a claim, as understood by one having ordinary skill in the art, will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional but does not require that feature or step does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claim 5 recites “at least any one of” then lists “a cap portion of a product,” “a sealing portion of a product package,” “a logo of a company or a product brand,” “a product display mark,” and “an indication of special notes.” Since “at least any one of” is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most specification support; for that reason, the disjunctive interpretation (one of A, B, or C) is being adopted for the purposes of this Office Action. Applicant’s comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claims recite a system and method of determining the size of a product when the product comes in multiple size variants. With respect to independent system claim 1:
STEP 1: Do the claims fall within one of the statutory categories?
YES. Claim 1 is directed to a system, i.e., a device or a machine.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea?
YES, the claims are directed toward a mental process (i.e., abstract idea).
The limitation “detecting a reference area within a product from an image by processing the image of the product; and determining a management unit of the product by using a ratio of a size of the reference area with respect to an image area of the product,” as drafted, recites an abstract idea, i.e., a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind of a person, i.e., concepts performed in the human mind (including observation, evaluation, judgment, opinion).
As such, a person could locate an area within a product in an image and determine the size of the product by comparing the size of the located area with the overall image size of the product or object, with a degree of error or lack thereof, either mentally or using pen and paper. The mere nominal recitation that the various steps are being executed by a processor (e.g., processing unit) does not take the limitations out of the mental process grouping. Thus, the claims recite a mental process.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
The only additional elements, “a memory” and “a processor,” are recited at a high level of generality and merely equate to “apply it,” or otherwise merely use a generic computer as a tool to perform an abstract idea, which is not indicative of integration into a practical application per MPEP 2106.05(f). See also MPEP 2106.04(a)(2)(III) with respect to Mental Processes: “Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer”. See also MPEP 2106.04(a)(2)(III)(C)(3) Using a computer as tool to perform a mental process and MPEP 2106.04(a)(2)(III)(D) as well as the case law cited therein.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
NO, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional steps/elements/limitations amount to no more than an abstract idea performed on a computer. The additional elements simply append well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception (WURC) per MPEP 2106.05(d) and 2106.07(a)(III). Therefore, claim 1 is not patent eligible.
In addition, the elements of claims 7 and 8 are analyzed in the same manner as claim 1. Therefore, independent claims 7 and 8 are not patent eligible, either.
A similar analysis applies to dependent claims 2-6, which, under their broadest reasonable interpretation, are identified as being either directed toward mere data gathering or an abstract idea (a mental process and mathematical calculation), as not reciting additional elements that integrate the judicial exception into a practical application, and as not reciting additional elements that amount to significantly more than the judicial exception.
For all of the above reasons, claims 1-8 (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception. Accordingly, claims 1-8 are not eligible subject matter under 35 U.S.C. 101.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4 and 6-8 are rejected under 35 U.S.C. 103 as being unpatentable over Tanii et al. (JP 6209717 B1 – Translation from Espacenet) in view of Kambara et al. (JP 2021117849 A – From IDS, translation from Espacenet).
Regarding claim 1, Tanii teaches “An image analysis system (Tanii paragraph [0025] "The narrowing down processing system 1 uses a management terminal 2 and an input terminal 3") comprising:
at least one memory configured to store instructions; and at least one processor configured to execute the instructions to perform operations (Tanii paragraph [0027] "The management terminal 2 and the input terminal 3 in the narrowing down processing system 1 are realized using a computer […] The computer has an arithmetic unit 70 such as a CPU that executes the arithmetic processing of a program, a storage device 71") comprising:
detecting a reference area within a product from an image by processing the image of the product (Tanii paragraph [0035] "The image information processing unit 22 performs a correction process for the captured image information received by the image information input reception processing unit 20, which places the captured image information in a correct orientation, and a face process for identifying an area (face area) from the captured image information where a comparison process (matching process) with the specimen information is to be performed"); and
by using a ratio of a size of the reference area with respect to an image area of the product (Tanii paragraph [0045] "In the face processing, it is preferable to perform a process of identifying attribute information of the product in the face area […] identifying the shape and aspect ratio of the product shown in the face area, it is also possible to identify the character string in the face area by performing OCR recognition processing on the image information of the face area, and to identify the color information (color information including hue, brightness, and saturation) of the image information in the face area and the proportion of its area").”
However, Tanii does not teach “determining a management unit”.
In an analogous field of endeavor Kambara teaches “determining a management unit (Kambara paragraph [0061] "The size comparison unit 254 compares the size of the object with the size of the missized product, and determines which range of sizes of the missized product the size of the object belongs to")”.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine a system of identifying products displayed on a shelf as taught by Tanii to include size comparison of products with similar features as taught by Kambara.
The suggestion/motivation for doing so would have been “there is a demand for a system that can automate the payment process for products and reduce the time required to pay for products when a customer purchases products displayed in a store. Therefore, the applicant has already filed a patent application (Patent Application No. 2016-080624) for a next-generation cash register system that, when a product is placed in a designated area such as a cash register, captures an image of the designated area with a camera, recognizes the candidate product object from the resulting captured image, identifies what kind of product the object (candidate product) is, and executes a series of processes up to making payment for the identified product" as noted in paragraph [0003] of the Kambara disclosure.
Therefore, it would have been obvious to combine the disclosure of Tanii with the Kambara disclosure to obtain the invention as specified in claim 1, as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
Claim 7 recites a method with steps corresponding to the elements of the system recited in claim 1. Therefore, the recited steps of this claim are mapped to the proposed combination in the same manner as the corresponding elements of system claim 1. Additionally, the rationale and motivation to combine the Tanii and Kambara references, presented in the rejection of claim 1, apply to this claim.
Claim 8 recites a computer readable medium including computer executable instructions corresponding to the elements of the system recited in claim 1. Therefore, the recited instructions of the computer readable medium of claim 8 are mapped to the proposed combination in the same manner as the corresponding elements of system claim 1. Additionally, the rationale and motivation to combine Tanii and Kambara, presented in the rejection of claim 1, apply to this claim.
Regarding claim 2, the combination of Tanii and Kambara teaches “The image analysis system according to claim 1, wherein the operations further comprise: determining, by comparing image feature information acquired from the image with product external appearance information indicating an external appearance feature of each product, a product having the external appearance feature having a matching degree, which is equal to or more than a reference, to the image feature information (Kambara paragraph [0057] "Regarding the comparison of feature points, the similar feature point number identification unit 251 determines similar feature points for the feature points and respective local feature amounts of the read product information with respect to the feature points and respective local feature amounts of the object […] The similar feature point number specifying unit 251 provisionally determines the product candidate with the largest number of similar feature points, which is equal to or greater than a predetermined number, as the product candidate most similar to the product"), when a plurality of products being a same except for a management unit are determined as a result of comparison of the image feature information with the product external appearance information (Kambara paragraph [0060] "The missized product identification unit 253 identifies missized products that are different in size from the list of similar products sent to the similar feature point number identification unit 251"), detecting the reference area from the image (Tanii paragraph [0035] "The image information processing unit 22 performs a correction process for the captured image information received by the image information input reception processing unit 20, which places the captured image information in a correct orientation, and a face process for identifying an area (face area) from the captured image information where a comparison process (matching process) with the specimen information is to be performed"), and determining the management unit of the product (Kambara paragraph [0061] "The size comparison unit 254 compares the size of the object with the size of the missized product, and determines which range of sizes of the missized product the size of the object belongs to").”
The proposed combination, as well as the motivation for combining the Tanii and Kambara references presented in the rejection of claim 1, applies to claim 2. Finally, the system recited in claim 2 is met by Tanii and Kambara.
Regarding claim 3, the combination of Tanii and Kambara teaches “The image analysis system according to claim 1, wherein the operations further comprise determining, by comparing image feature information acquired from the image with product external appearance information indicating an external appearance feature of each product, a product having the external appearance feature having a matching degree, which is equal to or more than a reference, to the image feature information (Kambara paragraph [0057] "Regarding the comparison of feature points, the similar feature point number identification unit 251 determines similar feature points for the feature points and respective local feature amounts of the read product information with respect to the feature points and respective local feature amounts of the object […] The similar feature point number specifying unit 251 provisionally determines the product candidate with the largest number of similar feature points, which is equal to or greater than a predetermined number, as the product candidate most similar to the product"), selecting the product external appearance information that should be compared with the image feature information from among the product external appearance information of each product, based on the determined management unit (Kambara paragraph [0060] "The missized product identification unit 253 identifies missized products that are different in size from the list of similar products sent to the similar feature point number identification unit 251").”
The proposed combination, as well as the motivation for combining the Tanii and Kambara references presented in the rejection of claim 1, applies to claim 3. Finally, the system recited in claim 3 is met by Tanii and Kambara.
Regarding claim 4, the combination of Tanii and Kambara teaches “The image analysis system according to claim 1, wherein the reference area is defined for each product (Tanii paragraph [0045] "In the face processing, it is preferable to perform a process of identifying attribute information of the product in the face area").”
The proposed combination, as well as the motivation for combining the Tanii and Kambara references presented in the rejection of claim 1, applies to claim 4. Finally, the system recited in claim 4 is met by Tanii and Kambara.
Regarding claim 6, the combination of Tanii and Kambara teaches “The image analysis system according to claim 1, wherein the operations further comprise determining the management unit (Kambara paragraph [0061] "The size comparison unit 254 compares the size of the object with the size of the missized product, and determines which range of sizes of the missized product the size of the object belongs to") by using a ratio of a size of the reference area with respect to the image area of the product (Tanii paragraph [0045] "In the face processing, it is preferable to perform a process of identifying attribute information of the product in the face area […] identifying the shape and aspect ratio of the product shown in the face area, it is also possible to identify the character string in the face area by performing OCR recognition processing on the image information of the face area, and to identify the color information (color information including hue, brightness, and saturation) of the image information in the face area and the proportion of its area") in a height direction (Tanii paragraph [0055] "Package shape includes the product shape (an attribute related to the product's external shape (shape), such as bottle, can, box, bag, bottle, etc.), actual dimensions (product height and width), and aspect ratio (the ratio of product height to width)").”
The proposed combination, as well as the motivation for combining the Tanii and Kambara references presented in the rejection of claim 1, applies to claim 6. Finally, the system recited in claim 6 is met by Tanii and Kambara.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Tanii and Kambara, in view of Yalniz et al. (US 2016/0379080 A1).
Regarding claim 5, the combination of Tanii and Kambara teaches “The image analysis system according to claim 1, wherein the operations further comprise detecting, as the reference area, an area associated with at least any one of a cap portion of a product, a sealing portion of a product package, a logo of a company or a product brand (Tanii paragraph [0046] "in face processing, a logo (such as a manufacturer logo or brand logo) in the face area may be identified"), a product display mark, and an indication of special notes (Tanii paragraph [0045] "identifying the shape and aspect ratio of the product shown in the face area, it is also possible to identify the character string in the face area by performing OCR recognition processing on the image information of the face area and to identify the color information (color information including hue, brightness, and saturation) of the image information in the face area and the proportion of its area").”
However, the combination of Tanii and Kambara does not teach detecting “a cap portion of a product, a sealing portion of a product package”.
In an analogous field of endeavor Yalniz teaches “a cap portion of a product, a sealing portion of a product package (Yalniz paragraph [0030] "the bottle shape might vary sufficiently such that the feature point extraction might be able to accurately determine the type of object, while in other instances it may be necessary to look at ratios such as bottle height, cap height, label width, and other such measurements with respect to a dimension of the logo or other such scalar element")”.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to combine a system of identifying products displayed on a shelf as taught by the combination of Tanii and Kambara to include various features extracted from an object as taught by Yalniz.
The suggestion/motivation for doing so would have been “An image analysis and matching process typically looks for features of an object represented in an image and performs a matching process whereby an attempt is made to locate an object with features that are determined to sufficiently match those of the captured image. It can be difficult to obtain accurate results, however, for objects that have a relatively low number of features that enable those objects to be accurately identified. Further, there can be various objects that are relatively similar such that even if the type of object can be identified, it can be difficult to determine the correct version, size, or model" as noted in paragraph [0002] of the Yalniz disclosure.
Therefore, it would have been obvious to combine the disclosure of Tanii and Kambara with the Yalniz disclosure to obtain the invention as specified in claim 5 as there is a reasonable expectation of success and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
“Machine Learning approaches to do size based reasoning on Retail Shelf objects to classify product variants” by Srivastava et al. discloses a method of determining product size when a product contains multiple size variants by processing the image and then calculating the size of the product.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASPREET KAUR whose telephone number is (571)272-5534. The examiner can normally be reached Monday - Friday, 7:30 am - 4:00 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amandeep Saini can be reached at (571)272-3382. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JASPREET KAUR/Examiner, Art Unit 2662
/AMANDEEP SAINI/Supervisory Patent Examiner, Art Unit 2662