DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments have been fully considered but are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 7, 11-14, 16, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao et al.1 (hereinafter “D1”) in view of Chinnaswamy2 (hereinafter “D2”), and further in view of D3.3
With regard to claim 1, D1 teaches a method of extracting a target text string, comprising:
at a controller of a data capture device, obtaining an image of an item having the target text string thereon (fig. 1: image capture device); at the controller, selecting a search area from the image (see fig. 1, ¶ 19: text and non-text region); at the controller, processing the search area via a primary image classifier, to identify a candidate target string (see ¶ 19: identifying text subregion for classification by character recognition, particularly text regions corresponding to price text);
and displaying the validated candidate target string via an output device of the data capture device (¶¶ 19, 55: display).
D1 fails to explicitly teach validating the candidate string based on a validation criterion; however, D2 teaches the missing feature (see D2 ¶ 39: validation/verification based on character count).
D1 relates to detecting text regions of a product label and detecting price text from the detected text regions. See ¶¶ 24, 26. D1 describes in ¶ 54 that the controller correctly identifies text region locations at a rate of between 93% and 96% at various imaging distances. Furthermore, the price text region is correctly detected for a minimum of 88% of samples. In ¶ 36, D1 describes price text classification and non-price text classification. In general, the classifier is configured to distinguish between text regions likely to contain price text and text regions that, although containing text, are not likely to contain price text. See ¶ 36. D1 describes in ¶ 51 the extraction of the price text sub-region. The sub-region includes upper and lower boundary lines to define the price text sub-region, and the region includes the string “5.77” and the characters “$” and “2”. See fig. 10B, where the price text region is “$25.77”. Here, the price text string is verified or validated based on the presence of the character “$”, for example. Other verification or validation criteria may be used. For example, price text regions usually contain a small number of characters, around five characters in the example “$25.77”. Other text regions, such as the barcode number, contain a much larger number of characters, for example 10 characters in the example “8855101630” in fig. 8. Knowledge of the number of characters can improve the classification and validation of the price text string.
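The combined validation cues discussed above (presence of a currency symbol and a small character count) can be illustrated with the following sketch. This is an illustration only, not code from any cited reference; the length threshold and the regular expression are assumptions chosen to match the “$25.77” and “8855101630” examples.

```python
import re

def looks_like_price(text: str, max_len: int = 8) -> bool:
    """Heuristic check that an OCR'd string is a price (e.g. "$25.77").

    Combines two cues: the presence of a currency symbol and a small
    character count, in contrast to longer strings such as barcode numbers.
    """
    text = text.strip()
    # Price strings are short; barcode numbers (e.g. "8855101630") are longer.
    if len(text) > max_len:
        return False
    # Require a currency symbol followed by digits, optionally with cents.
    return re.fullmatch(r"\$\d+(\.\d{2})?", text) is not None
```

A candidate string such as “$25.77” passes both checks, while the ten-digit barcode number fails the currency-symbol check and, depending on the threshold, the length check as well.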
D2 teaches validating the candidate string based on a validation criterion (see D2 ¶ 39: validation/verification based on character count).
One skilled in the art before the effective filing date would have found it obvious to incorporate the known teachings of D2 into the configuration of D1. By incorporating the character count, validation of the price text string in D1 can be improved, as discussed above.
D1 and D2 fail to explicitly teach the target text string being indicative of a date and normalizing an alphanumeric format of the candidate target string; however, D3 teaches the missing features (see D3 ¶¶ 39, 47: extracted text of a date is replaced with a different date format).
One skilled in the art before the effective filing date would have found it obvious to combine the teachings to arrive at the claimed invention. In particular, it would have been obvious to incorporate the known teaching of modifying the format of the text, such as a date format, as taught by D3 into the configuration of D1. The motivation for modifying or replacing the format of the text would have been to display the text in a format familiar to the user, such as a familiar date format, to avoid confusion or misinterpretation. For example, it would have been obvious to replace a numerical date format with a textual date format, or to modify the arrangement of the day, month, and year to align with user expectations.
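The format-replacement rationale above (numeric date replaced with a textual date) can be sketched as follows. This is a hypothetical illustration, not code from D3; the assumed input format is "MM/DD/YYYY".

```python
from datetime import datetime

def to_textual_date(numeric_date: str) -> str:
    """Replace a numeric date format with a textual one,
    e.g. "03/16/2013" -> "March 16, 2013", so the displayed date
    avoids day/month ambiguity for the user."""
    parsed = datetime.strptime(numeric_date, "%m/%d/%Y")
    # %B gives the full month name; the day is formatted as an int
    # to strip any leading zero portably across platforms.
    return f"{parsed.strftime('%B')} {parsed.day}, {parsed.year}"
```

For example, `to_textual_date("03/16/2013")` yields `"March 16, 2013"`, removing the ambiguity between day-first and month-first numeric conventions.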
With regard to claim 2, D1 teaches the method of claim 1, wherein selecting the search area includes processing the image via an initial classifier to identify an associated text string (see ¶ 19: initial classification of price text region or non-price text region).
With regard to claim 3, D1 teaches the method of claim 2, wherein selection of the search area is based on a location of the associated text string (fig. 1: server, transmission to mobile device and vice versa).
With regard to claim 4, D1 teaches the method of claim 1, wherein the displaying includes transmitting a message to a server, the message containing the validated candidate target string (see fig. 1: transmission between server and user device).
With regard to claim 7, D1 teaches the method of claim 1, further comprising: detecting a machine-readable indicium from the image; decoding an item identifier from the machine-readable indicium; and displaying the item identifier with the validated candidate target string (see ¶¶ 17, 29, 31: barcode reading).
With regard to claims 11-14, see discussion of claims 1-4, respectively.
With regard to claim 17, see discussion of claim 7.
With regard to claim 6, D3 teaches the method of claim 5, wherein normalizing the candidate target string includes: storing a repository containing (i) a set of candidate values corresponding to a target string component, and (ii) a corresponding normalized value for the target string component; identifying one of the candidate values in the candidate target string; and replacing the identified value with the normalized value from the repository (see ¶¶ 39, 47: replacing the extracted date format with corresponding alphabetic characters stored in association in the database). The motivation for incorporating this feature is the same as stated above.
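The repository-based normalization recited in claim 6 can be sketched as a lookup table mapping candidate values to normalized values. The mapping below (OCR'd month strings to two-digit values) is an assumed example for illustration, not the repository described in D3.

```python
# Assumed repository: candidate month strings -> normalized two-digit value.
MONTH_REPO = {
    "JAN": "01", "JANUARY": "01",
    "FEB": "02", "FEBRUARY": "02",
    "MAR": "03", "MARCH": "03",
}

def normalize_component(candidate: str, repo: dict) -> str:
    """Identify a candidate value in the repository and replace it with
    the corresponding normalized value; unknown values pass through."""
    return repo.get(candidate.strip().upper(), candidate)
```

For example, an extracted component "Mar" would be replaced with the stored normalized value "01" for January, "03" for March, and so on, while an unrecognized component is left unchanged for downstream validation.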
With regard to claim 16, see discussion of claim 6.
Claims 8-10 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over D1, D2, and D3, and further in view of Hu et al.4 (hereinafter “D4”).
With regard to claim 8, D1, D2, and D3 fail to teach wherein the validation criterion includes an expected range corresponding to a date component, and wherein validating the candidate target string includes determining whether a candidate component of the candidate target string falls within the range. However, D4 teaches these missing features (see ¶¶ 25, 34-35: OCR extracts date range for certification).
One skilled in the art before the effective filing date would have found it obvious to incorporate the known teachings of D4, wherein an extracted date is compared with a date range to determine the validity of an item, into the configuration of D1, D2, and D3. One skilled in the art before the effective filing date would have been motivated to verify the date to check whether it is outside the range of possible values, such as an expiration date.
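The range check described for claim 8 can be sketched as follows. This is an illustration only; the ten-year window is an assumption standing in for the "predetermined number of years" recited in claim 10.

```python
from datetime import date

def year_in_expected_range(candidate_year: int, years_ahead: int = 10) -> bool:
    """Validate a date component by checking it falls within an expected
    range: here, between the current year and a predetermined number of
    years after it (e.g. a plausible expiration date)."""
    current = date.today().year
    return current <= candidate_year <= current + years_ahead
```

A candidate year outside this window (for instance, an OCR misread producing a year in the past) fails validation, flagging the candidate target string for re-capture or correction.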
With regard to claims 9 and 10, which recite the method of claim 8, wherein the date components include a year, a month, and a day, and the method of claim 9, wherein the range is defined by a current year and a predetermined number of years after the current year, see ¶ 52 (issue date and expiration date; it is implicit that the components of a date include year, month, and day).
With regard to claims 18-19, see discussion of claims 8-9, respectively.
With regard to claim 20, see discussion of claim 10.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AVINASH YENTRAPATI whose telephone number is (571) 270-7982. The examiner can normally be reached 8AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sumati Lefkowitz can be reached on (571) 272-3638. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AVINASH YENTRAPATI/Primary Examiner, Art Unit 2672
1 US Publication No. 2021/0142092.
2 US Publication No. 2013/0156288.
3 US Publication No. 2020/0250266.
4 US Publication No. 2023/0351407.