Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The amendment provided 10/03/2025 has been entered and considered. Claims 1 and 3-20 have been amended.
Response to Amendment
Specification objections
In view of the amendment provided 10/03/2025, the specification objection of the non-final rejection 07/03/2025 is hereby withdrawn.
Claim objections
In view of the amendments provided 10/03/2025, the claim objections of the non-final rejection 07/03/2025 are hereby withdrawn.
Double patenting
In view of the second terminal disclaimer (10/28/2025), which replaces the first, the double patenting rejection is hereby withdrawn.
Response to Arguments
On pages 10-12 of the remarks (10/03/2025), applicant contends that Fukuda does not explicitly indicate that the product name is output by voice based on determining that the reliability of the recognition result of the recognized product is lower than the reference value.
With regard to amended claim 1, [0074] is directed to condition a, which, as applicant recognizes, concerns a state in which the similarity is higher than a reference value rather than lower. As such, the 102 rejection of the non-final rejection is hereby withdrawn as moot in view of the amendment (which adds a determination step). However, as outlined below, Fukuda shows the amended limitation in a different portion of its description.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-6, 8-9, 11-13, 15-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fukuda et al. (US 20150193759 A1, provided by applicant; hereinafter “Fukuda”) in view of Thatcher (non-patent literature titled “Screen reader/2: access to OS/2 and the graphical user interface”; hereinafter “Thatcher”).
Regarding claim 1, Fukuda teaches wherein: a registration apparatus (checkout system; [0026] discloses a checkout system for item registration) comprising: at least one memory configured to store one or more instructions (HDD; [0036] discloses the use of memory units, including the HDD that stores some or all of the programs used, to store files to be executed by the CPU, which comprises the merchandise recognizing device. The programs are understood to be the one or more instructions); and
at least one processor configured to execute the one or more instructions (CPU 161; [0036] discloses that the system’s CPU utilizes the one or more instructions (corresponding to the claims) stored on the HDD to perform actions) to: acquire an image obtained by imaging a placement surface of a table ([0030] discloses that the merchandise recognizing device includes a scanning unit used to collect image data, and thus acquire an image, as per [0031]. Additionally, [0041] discloses that the imaging unit is controlled by CPU 161), on which a product is placed ([0031] discloses that the imaging device captures image data through the reading window (103) of Fig. 2. Further, as per Fig. 2, this imaging is done with respect to the load receiving surface, Fig. 2 (152) (see [0029] for this distinction), understood to be the placement surface of a table (understood to be the counter table, Fig. 2 (151)));
recognize the product included in the image ([0032] discloses that the system’s merchandise recognizing device recognizes the item, understood to be a product, as one or more items previously registered. See also [0072], which discloses that the system recognizes an imaged item, as a registered item);
determine whether reliability of a recognition result ([0061] discloses a set of conditions used when comparing a given item in the image data to items determined to be registered items. Per [0064], the conditions of [0061] utilize thresholding, and are thus understood to disclose determination of the reliability of a recognition result (being the recognition of items)) of the recognized product is lower than a reference value ([0067] discloses that, to satisfy the conditions of [0061], the product (corresponding to the claims) must be less than a first threshold, and is thus lower than a reference value (understood as the first threshold’s value)); and
output, based on determining that the reliability of the recognition result of the recognized product is lower than the reference value ([0067] discloses the determination of an object identification when said product is recognized to be below a first threshold. Further, per [0075] and Fig. 7, this results in the display of the recognized product’s name (as shown in Fig. 7) for the sake of confirmation by the user), a name of the recognized product (Fig. 7 and [0075] indicate that the recognized product (corresponding to the claims) has its name presented to the user).
Fukuda does not explicitly teach wherein: the user-viewed screen is read out by voice.
However, in a related field of endeavor, Thatcher teaches wherein: the user-viewed screen is read out by voice (section 2 bullet point 1 indicates that the screen reader system may read out the entire view of the screen (see also section 2.1 para. 5 lines 1-2, which further indicate that the screen reader produces voice outputs)).
Thatcher, like Fukuda, teaches a system that performs audio output for the user.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fukuda to read out text on its screen, as taught by Thatcher. The motivation for the proposed modification would have been to aid visually impaired users in understanding what is occurring on the screen of the system in use (a noted benefit in Thatcher section 1.1 para. 1, which details the philosophy behind screen readers). Additionally, Fukuda provides audio feedback in relation to the system’s actions, such as in the case of condition a for an object that exceeds the first threshold (per Fukuda [0074]), showing an interest in providing the user audio feedback.
Regarding claim 2 [dependent on claim 1], Fukuda teaches wherein: the at least one processor is further configured to execute the one or more instructions to: register the recognized product as a checkout target ([0025] discloses that the POS terminal (which comprises the checkout system of Fig. 1) registers items. Further, as per [0032], the items are recognized by the merchandise recognizing device of Fig. 1 (comprising the checkout system of Fig. 1) and are registered by the POS terminal as a sale (a recognized sale item being understood to disclose a checkout target)).
Regarding claim 4 [dependent on claim 1], Fukuda, in view of Thatcher, teaches wherein: the at least one processor is further configured to execute the one or more instructions to: determine whether the recognized product is on a list of products (PLU file; Fukuda Fig. 3 provides an example of the PLU file, which contains a list of items) to be a voice processing target (Fukuda [0074] discloses that the system checks an item against the PLU file to determine whether it is a registered item. Additionally, when this triggers an audio signal, as per Fukuda [0074] lines 1-5 and lines 18-22, the item is understood to be determined as a voice processing target); and
output, based on determining the recognized product to be on the list, the name of the recognized product (Fukuda [0067] discloses the determination of an object identification when said product is recognized to be below a first threshold. Further, per Fukuda [0075] and Fukuda Fig. 7, this results in the display of the recognized product’s name (as shown in Fig. 7) for the sake of confirmation by the user) by voice (Thatcher section 2 bullet point 1 indicates that the system for the screen reader may read out the entire view of the screen (see also Thatcher section 2.1 para 5 lines 1-2, which further indicate the screen reader outputs voice outputs)).
The reasons for combination are the same as provided above.
Regarding claim 5 [dependent on claim 4], Fukuda teaches wherein: the at least one processor is further configured to execute the one or more instructions to: receive, by user operation, selection of a target product ([0079] and Fig. 8 disclose use of an item selection screen for user input, thus disclosing selection of a target product by a user operation) to be output by voice ([0082] discloses that the selected item may be used as the identified item. Further, [0074] lines 15-22 denotes that identified items may have their name output via audio); and
generate the list including information of the target product ([0036] lines 8-10 discloses transmission of a file from a store computer to be stored on the HDD. This transmission is understood to disclose generation of a usable list by the checkout system using said store computer).
Regarding claim 6 [dependent on claim 4], Fukuda, in view of Thatcher, teaches wherein: the at least one processor is further configured to execute the one or more instructions to: determine whether a product attribute of the recognized product satisfies a predetermined condition (Fukuda [0067] discloses that, to satisfy the conditions of Fukuda [0061], the product (corresponding to the claims) must be less than a first threshold. The predetermined condition is understood to be condition c, with the product attribute being the value compared to the threshold); and
output, based on determining that the product attribute of the recognized product satisfies the predetermined condition, the name of the recognized product (Fukuda [0067] discloses the determination of an object identification when said product is recognized to be below a first threshold. Further, per Fukuda [0075] and Fukuda Fig. 7, this results in the display of the recognized product’s name (as shown in Fig. 7) for the sake of confirmation by the user) by voice (Thatcher section 2 bullet point 1 indicates that the screen reader system may read out the entire view of the screen (see also Thatcher section 2.1 para. 5 lines 1-2, which further indicate that the screen reader produces voice outputs)).
The reasons for combination are the same as provided above.
As to claims 8, 9, 11, 12, and 13, they are the method executed by the system of claims 1, 2, 4, 5, and 6 (respectively). As such, they recite similar limitations to claims 1, 2, 4, 5, and 6 (respectively), and are rejected for the same reasons as provided above.
As to claims 15, 16, 18, 19, and 20, they are the method executed by the system of claims 1, 2, 4, 5, and 6 (respectively). As such, they recite similar limitations to claims 1, 2, 4, 5, and 6 (respectively), and are rejected for the same reasons as provided above.
Claims 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Fukuda, in view of Thatcher, and further in view of Nanbu et al. (JP 2004127013 A, provided by applicant; hereinafter “Nanbu”).
Regarding claim 3 [dependent on claim 1], Fukuda does not explicitly teach wherein: the at least one processor is further configured to execute the one or more instructions to: display information for highlighting a placed position of the product on the table while the name of the product is being output by voice.
However, in a similar field of endeavor, Nanbu teaches wherein: the at least one processor is further configured to execute the one or more instructions to: display, at least while the name of the recognized product is being output by voice ([0043] discloses that the system generates audio of the name of the product through use of a voice as an additional procedure of identification of a product, thus disclosing that the voice states a product name while products are being highlighted. See also that the marker position is continuously updated, per [0045], and is thus present throughout the identification of said object (identified and highlighted objects being understood as recognized products)), information highlighting a placed position ([0020] discloses the tracking of an item’s position (understood to be the product) on a screen) of the recognized product on the table (the surface products are placed on; Figs. 3-5 depict the imaging of items, including a given item that is imaged while resting on a surface. See also [0042], which discloses the highlighting of identified items using markers on screen 121 (note also in Figs. 3-5 that 121 has items placed on it)).
Nanbu, like Fukuda, teaches a point of sales system that identifies products and outputs product information related to them.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fukuda to include highlighted position data, as taught by Nanbu. The motivation for the proposed modification would have been to show the user the location of similar but different objects, so as to reduce the chance of user confusion in the event of different pricing. Such confusion is possible given that a plurality of the same kind of food may be allocated individual prices, as evidenced by Fukuda Fig. 3, and is reducible through item-unique labeling, as shown in Nanbu Figs. 3-5.
As to claim 10, it is the method executed by the system of claim 3. As such, it recites similar limitations to claim 3 and is rejected for the same reasons as provided above.
As to claim 17, it is the method executed by the system of claim 3. As such, it recites similar limitations to claim 3 and is rejected for the same reasons as provided above.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Fukuda, in view of Thatcher, and further in view of Watanabe (JP 2009058236 A, provided by applicant; hereinafter “Watanabe”).
Regarding claim 7 [dependent on claim 1], Fukuda teaches wherein: the output of the product by voice ([0074] lines 19-22 discloses the output of the identified product’s name using audio. Further, [0082] lines 6-9 discloses that the system can process a plurality of items. Thus, the system may output a product by voice), as well as the output, in response to recognition of the recognized product, of a second product by voice ([0074] lines 19-22 discloses the output of the product’s name using audio. [0082] lines 6-9 discloses that the system can process a plurality of items. Thus, there is understood to be an additional product that can have audio identification, this being the output of a second product by voice).
Fukuda does not explicitly teach wherein: to interrupt processing of outputting a first notification by voice, and start processing of outputting the second notification by voice.
However, in a related field of endeavor, Watanabe teaches wherein: to interrupt processing of outputting a first notification by voice, and start processing of outputting the second notification by voice (page 4 para. 6 discloses that the processing for a voice notification of a lower ranked notice (understood to be the first notification) is interrupted by a higher ranked notice (understood to be the second notification)).
Watanabe, like Fukuda, teaches a system that presents the user audio in relation to detections of the system.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Fukuda to implement audio interruption, as taught by Watanabe. Such a modification would have been obvious to try, as it is one of a predictable and ascertainable group of similar features, which are:
provide notifications in order of detection;
fail to output the second notification as a result of announcing the first;
interrupt the first notification with the second;
provide the notifications concurrently; and
end all audio notifications.
This group addresses the need to provide a user with impaired vision a notification of a detection in a sensor-based system using audio. As such, given a finite number of potential solutions for the recognized need, one of ordinary skill in the art could have pursued the known potential solutions.
As to claim 14, it is the method executed by the system of claim 7. As such, it recites similar limitations to claim 7 and is rejected for the same reasons as provided above.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN M COOMBER whose telephone number is (571)270-0950. The examiner can normally be reached Monday - Friday 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KEVIN M COOMBER/Examiner, Art Unit 2663
/GREGORY A MORSE/Supervisory Patent Examiner, Art Unit 2698