DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDSs) submitted on May 22, 2024 and December 5, 2024 are in compliance with 37 CFR 1.97 and 1.98 and therefore have been considered by the examiner and placed in the file.
Applicant’s Response To Restriction Requirement
In response to the examiner’s restriction requirement, Applicant has elected, with traverse, the claims of species A, namely claims 3 and 17, along with generic claims 1, 2, 15 and 16, for prosecution on the merits. However, Applicant argues that claims 5, 6, 12-17, 19, 24 and 25 should also be examined with the species A claims because claims 5, 6 and 19 are allegedly linking claims for species C and D that are not directed to either species, and claims 12-14, 24 and 25 are not directed to any of species A-D. The examiner finds this argument persuasive in part and therefore will examine claims 1-3, 5, 12-17, 24 and 25 on the merits. With regard to claims 6 and 19, those claims would be examined with the claims of species C or D for the reasons set forth in the previous Office Action.
With respect to Applicant’s argument that there would not be a serious search or examination burden on the examiner because species A allegedly is not mutually exclusive of species C and D, the examiner disagrees. Species are mutually exclusive if the claims to one species require a characteristic that is not required by the claims to another species. See MPEP 806.04(f). That is the case here, and there is nothing in the record indicating that the species as claimed are obvious variants of one another. In particular, species A requires decoding of a barcode image, which is not required by species B, C or D. Species B requires text identification, which is not required by species A, C or D. Species C requires that the identification device be independent of the camera module, which is not required by species A, B or D. Species D requires that the key and the camera module be located on a portable electronic device, which is not required by any of the other species. Therefore, species A-D as claimed are mutually exclusive of one another.
Claim Interpretation
The claims in this application are given their broadest reasonable interpretation (BRI) using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
In the following, some of the terms in the claims have been given BRIs in light of the specification. These BRIs are used for purposes of searching the prior art and examining the claims, but are not thereby incorporated into the claims. Should Applicant believe that different interpretations are appropriate, Applicant should point to the portions of the specification that clearly support a different interpretation.
The BRI of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
“camera module” in claims 1 and 15;
“processing module” in claim 15;
“conversion module” in claim 15;
“comparison module” in claim 15;
“storage module” in claim 15;
“output module” in claim 15; and
“mark confirmation module” in claim 25.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The present disclosure provides support for the structures of all of these claim terms. The BRI for the camera module, based on para. [0043] of the present specification, is a camera, and equivalents. The BRI for the processing module, based on paras. [0043]-[0046] of the present disclosure, is one or more processors, and the BRI for the conversion module and the comparison module is the one or more processors of the processing module configured to perform the corresponding operations, and equivalents. The BRI for the storage module, based on paras. [0042]-[0043] of the present disclosure, is a memory device, and equivalents. The BRI for the output module, based on paras. [0042]-[0043] of the present disclosure, is an output port of the system that is external or internal to the one or more processors, and equivalents. The BRI for the mark confirmation module, based on para. [0033] and Fig. 6B of the present disclosure, is the one or more processors of the processing module configured to perform the corresponding operations, and equivalents.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-3, 5, 12-17, 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over an article entitled “On Scanning Linear Barcodes From Out-of-Focus Blurred Images: A Spatial Domain Dynamic Template Matching Approach” by Chen et al., published in June 2014 in IEEE Transactions on Image Processing, Vol. 23, No. 6 (hereinafter referred to as “Chen”), in view of U.S. Pat. No. 11,328,147 B1 to Ahmed et al. (hereinafter referred to as “Ahmed”), and further in view of European Patent No. EP2843510 B1 to Lee et al. (hereinafter referred to as “Lee”).
Regarding claim 1, Chen discloses a barcode image recognition method (Abstract discusses a barcode recognition system and method that can recognize blurry out-of-focus (OOF) barcode images), comprising:
enabling a camera module (as indicated above, the BRI for camera module is a camera; section IV C (3) of Chen discusses using real-world barcode images captured by mobile phone cameras to generate a “database three”, which means that the camera would necessarily be enabled in order to capture the barcode images for “database three”);
continuously capturing, by the camera module, at least one target object in advance after the camera module is enabled to obtain a plurality of default images with continuous capturing time (section IV C (3), the target objects are the barcodes in the default images that are captured in advance using phone cameras to generate database three: “[t]o reduce the difficulty of the test, we prepared additional 108 images captured by a HTC Desire device, an Android smart phone carrying an autofocus camera module. These 108 images were captured by pointing the phone at one single barcode when its images were continuously taken”);
identifying, by a first artificial neural network, a first target image of each target object in each default image (the BRI for this limitation is using an artificial neural network to identify the image of the barcode, QR code or text contained in the default image; in Chen, the block labeled “Barcode detection/localization” in Fig. 2 performs the identifying operation to identify the barcodes that are contained in the default images when generating database three; however, Chen does not explicitly disclose using a neural network for this purpose);
capturing each first target image from each default image (the object labeled “barcode image” in Fig. 2 corresponds to the captured first target images contained in the default images);
converting each first target image into a string (the BRI for the term “string”, based on para. [0024] of the present disclosure, is that it means the information represented by the barcode that is obtained by decoding the barcode; the barcode scanline extraction block of Fig. 2 of Chen converts the target images into decoded strings);
storing each first target image and the string corresponding to each first target image (in Chen, the extracted scanline strings corresponding to the reference templates are stored with the target images in the scanline template banks shown in Fig. 2; section II: “[i]n this system, a set of scanline template banks is created offline. Each bank of scanline templates contains templates corresponding to all possible barcode values according to a specific OOF blur level”);
triggering the camera module according to a confirmation signal to take a single capture of a predetermined mark to obtain a captured image (after database three has been created and is ready for use, a camera phone such as the examples discussed in Section IV C (3) is used to take single captures of images of barcodes that are then decoded and compared to the templates stored in the scanline template banks of Fig. 2 to perform template matching; Chen is silent as to what triggers the camera to capture the image; Chen also does not explicitly disclose capturing a predetermined mark in the captured image; the BRI for this limitation, based on paras. [0029]-[0037] of the present disclosure, is that some type of pointer is used to mark the second target images, such as a laser or infrared pointer or the user’s finger, to indicate that a marked image is to be captured and identified; Chen does not explicitly disclose using such a pointer or otherwise marking images);
identifying, by a second artificial neural network, a marked image of the predetermined mark in the captured image and a second target image of a selected target object of the at least one target object that overlaps with the predetermined mark (the BRI for this limitation, based on paras. [0032]-[0037] of the present disclosure, is that the predetermined mark (e.g., the finger tip) overlaps the selected second target image and that a second artificial neural network is used to identify the marked image; in Chen, the barcode detection/localization block shown in Figs. 2 and 7 identifies the barcodes in the barcode images of selected second target images that are to be template matched with the templates contained in the scanline template banks shown in Fig. 2; however, Chen does not explicitly disclose identifying marked images; Chen also does not explicitly disclose using a second artificial neural network to perform this identifying limitation);
comparing the second target image with each first target image to obtain a final selected image, the final selected image being the first target image with the highest similarity to the second target image (Chen discloses performing template matching to attempt to match the barcode scanline strings of the second target barcode images with barcode scanline strings stored in the scanline template banks and selecting the final selected image that has the highest similarity to the second target image; section I: “[a] dynamic template matching scheme has been designed to match deformed barcode signal against reference waveforms for barcode value detection. More specifically, once the location of the barcode and blur level is detected, deformed barcode waveform is extracted from the blurred image and segmented. After normalization, these waveform segments are compared with pre-computed standard reference waveform segments at the estimated blur level through dynamic programming by inferencing a directed graphical model. Then the reference waveform most similar to the observed barcode waveform is found. After being verified, the found reference waveform’s corresponding barcode value is treated as the output of the proposed barcode scanning system”); and
reading the string corresponding to the final selected image from the stored string and outputting the string (section I: “[t]hen the reference waveform most similar to the observed barcode waveform is found. After being verified, the found reference waveform’s corresponding barcode value is treated as the output of the proposed barcode scanning system”).
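For illustration only, and not as a characterization of Chen’s actual implementation, the template-matching flow mapped above can be sketched as follows. The cosine-similarity measure and all function names below are assumptions made solely for illustration; Chen instead performs dynamic programming over a directed graphical model.

```python
# Illustrative sketch only: Chen's dynamic template matching is approximated
# here by cosine similarity over normalized scanlines. Assumes the scanline
# and all templates have been resampled to a common length.
import numpy as np

def normalize(waveform: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-norm normalization of an extracted scanline."""
    centered = waveform - waveform.mean()
    return centered / (np.linalg.norm(centered) + 1e-9)

def match_scanline(scanline: np.ndarray, template_bank: dict) -> str:
    """Return the barcode value whose reference waveform is most similar to
    the observed scanline, i.e., the 'final selected image' of claim 1."""
    observed = normalize(scanline)
    best_value, best_score = None, float("-inf")
    for value, template in template_bank.items():
        score = float(np.dot(observed, normalize(template)))  # cosine similarity
        if score > best_score:
            best_value, best_score = value, score
    return best_value  # the stored string corresponding to that waveform
```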
As indicated above, Chen does not explicitly disclose using first and second artificial neural networks to identify the first target images and the second target images, respectively. However, it is well known in the art of image analysis to use artificial neural networks to identify objects in images. Ahmed, in the same field of endeavor, discloses using first and second classifiers, which can be first and second artificial neural networks, to identify a region in an image that includes a barcode and to identify at least one character string corresponding to the identified barcode, respectively (Fig. 3A, steps 304 and 310; Col. 17, lines 58-63 and Col. 18, lines 11-28).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the system and method of Chen represented by Figs. 2 and/or 7 to use first and second neural networks as taught by Ahmed to, respectively, identify the first target images that are used to generate templates for the scanline template banks and to identify second target images that are being compared via template matching to the templates stored in the template banks. One of ordinary skill in the art would have been motivated to make the modification to improve the robustness of the image identification processes by taking advantage of the well-known benefits of using trained neural networks to perform such tasks. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (e.g., incorporating software into the system of Chen to implement first and second neural networks for performing the image identification operations).
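A minimal sketch of the proposed two-network arrangement follows; the callables and names are assumptions for illustration only and are not drawn verbatim from Ahmed.

```python
# Illustrative only: a first network localizes barcode regions (cf. Ahmed
# Fig. 3A, step 304) and a second network identifies the string for each
# region (cf. step 310). The callables are assumed stand-ins for trained models.
from typing import Callable, List, Tuple

Region = Tuple[int, int, int, int]  # (x, y, width, height) of a detected barcode

def recognize(image: object,
              first_network: Callable[[object], List[Region]],
              second_network: Callable[[object, Region], str]) -> List[str]:
    regions = first_network(image)          # identify target images in the frame
    return [second_network(image, region)   # identify the string for each region
            for region in regions]
```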
As indicated above, Chen does not explicitly disclose triggering the camera module according to a confirmation signal to take a single capture of a predetermined mark to obtain a captured image. The BRI for this limitation, based on paras. [0035]-[0037] of the present disclosure, is that when the system detects that a marker such as a laser pointer or the human finger is identified as being in the image region overlapping the target image for a predetermined period of time, the confirmation signal is generated to trigger the camera to capture the image. This limitation is not explicitly disclosed in Chen. As indicated above, Chen is silent as to what causes the camera to be triggered to capture a target image.
Lee, in the same field of endeavor, discloses detecting a finger image in an image region that overlaps a target image of an object, such as a barcode or QR code, and generating a signal that triggers the camera module to capture the image when movement of the finger does not occur during a predetermined period of time (Col. 3, lines 32-34 disclose that the object can be a QR code, a barcode, a wine label, etc.; Col. 3, line 53- Col. 4, line 11 discusses the method of recognizing the object comprising “capturing an image using a camera module, detecting a finger image in predetermined partial region of the captured image and detecting at least one object in a region of predetermined size that is located adjacent to the finger image is detected in the captured image”; Col. 4, lines 20-22 discuss performing the recognizing method, which includes triggering the camera to capture the target image “if movement of the first finger image … does not occur for a predetermined time.”; in Lee, the signal that triggers the camera module to capture the image constitutes a confirmation signal in that it confirms that the image region where the finger overlaps for a predetermined period of time is the intended capture region).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the system and method of Chen to trigger the phone camera to capture second target images that are marked with a predetermined mark, such as a finger placed in a region that overlaps the image and remains there without moving for a predetermined period of time, as taught by Lee. One of ordinary skill in the art would have been motivated to make the modification to ensure that only intended images are captured, thereby reducing errors and computational overhead associated with decoding and processing incorrectly identified images. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (e.g., incorporating software into the system of Chen that triggers the camera to perform image capture when the system detects that a finger or cursor overlaps an image region for a predetermined period of time).
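A minimal sketch of the finger-dwell trigger taught by Lee, as proposed to be incorporated into Chen, is set forth below; the dwell threshold and the helper callables are assumptions for illustration only.

```python
# Illustrative only: emit a confirmation signal once a detected finger region
# overlaps the target image region continuously for a predetermined time.
# The 1.0-second threshold is an assumed value, not taken from Lee.
import time

DWELL_SECONDS = 1.0  # Lee's "predetermined period of time" (value assumed)

def wait_for_confirmation(detect_finger, overlaps, target_region) -> bool:
    """Return True (the confirmation signal) when the finger has dwelled
    on the target region long enough to trigger a single capture."""
    dwell_start = None
    while True:
        finger_region = detect_finger()        # per-frame finger detector (assumed)
        if finger_region is not None and overlaps(finger_region, target_region):
            if dwell_start is None:
                dwell_start = time.monotonic()  # dwell begins
            elif time.monotonic() - dwell_start >= DWELL_SECONDS:
                return True                     # confirmation signal: trigger capture
        else:
            dwell_start = None                  # movement or absence resets the dwell
```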
Regarding claim 2, under MPEP 2111.04, claim scope is not limited by claim language that suggests or makes optional but does not require limitations. In addition, when a claim limitation requires selection of an element from a list of alternatives, the prior art teaches the limitation if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int’l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009). The BRI for the claim limitation “the type is one of a barcode image and a string image” is that the claim requires only one of the alternatives. In Chen, the barcode detection/localization block shown in Figs. 2 and 7 identifies the type of each first target image as a barcode type.
Regarding claim 3, Chen discloses that the first target image is a barcode image and the step of converting each first target image to the string comprises: decoding the first target image into the string, wherein the string is information carried by the first target image (the barcode detection/localization block of Fig. 2 identifies the barcode images and the barcode scanline extraction block of Fig. 2 decodes the barcode images into scanline strings containing information carried by the target barcode images).
Regarding claim 5, Chen does not explicitly disclose receiving, from an input component, the confirmation signal that triggers the camera module to capture the image.
As indicated above in the rejection of claim 1, Lee discloses recognizing a barcode image by determining whether the finger of a user overlaps a predetermined region of the barcode image for a predetermined period of time and, if so, triggering image capture by the camera module (Col. 3, line 53 – Col. 4, line 22). In Lee, the method is performed by an electronic device that can be a wearable device (e.g., a wristwatch) that includes the camera module. The wearable device 500 processes the image captured by the camera module of the wearable device 500 and detects when the finger is in a predetermined region of the captured image. When it is determined that the finger has been in the predetermined region for a predetermined period of time (Col. 4, lines 20-22), the aforementioned object recognition process is performed, which involves triggering the camera module to capture the image of the object pointed to by the finger and then processing the image to identify the object (e.g., the barcode, QR code, menu item, etc.). In this scenario of Lee, the wearable device is the input component, and the camera module of the wearable device receives the signal from the wearable device triggering the camera module to capture the image and perform object recognition (Fig. 5 and Col. 12, lines 28-52). The received signal constitutes a confirmation signal because it confirms that the conditions are such (e.g., finger in the predetermined region for a predetermined period of time) that the image capture can take place.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the system and method of Chen to receive a confirmation signal from an input device, such as the processor of the camera phone, to trigger the camera to single-capture second target images that are marked with a predetermined mark, such as a finger placed in a predetermined region that overlaps the image for a predetermined period of time, as taught by Lee. One of ordinary skill in the art would have been motivated to make the modification to ensure that only intended images are captured, thereby reducing errors and computational overhead associated with decoding and processing incorrectly identified images. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (e.g., incorporating software into the system of Chen that triggers the camera to perform image capture when the system detects that a finger or cursor overlaps an image region for a predetermined period of time).
Regarding claim 12, the BRI for the limitation “generating the confirmation signal according to the capturing time of the default images” is that the generating of the confirmation signal controls the timing at which the marked images are captured by the camera module. The BRI is based on paras. [0006] and [0030]-[0038] of the present disclosure. As indicated above, Chen does not explicitly disclose marking images or capturing marked images.
Lee discloses:
identifying the marked image of the predetermined mark in each default image to obtain a marked position of the marked image in each default image (in Lee, the user’s finger placed in the predetermined region of the captured image constitutes a predetermined mark and the electronic device 500 obtains the marked position by determining when/if the finger has been detected in the predetermined region of the captured image, Col. 12, lines 28-52 and Fig. 5); and
generating the confirmation signal according to the capturing time of the default images with the marked image in these default images and the marked position (Col. 3, line 53 – Col. 4, line 22, Col. 12, lines 28-52 and Fig. 5; as indicated above in the rejection of claim 5, Lee discloses generating the confirmation signal according to the capturing time of the images by sending the confirmation signal to the camera module to trigger image capture when a determination is made that the finger has been in the predetermined region of the image for the predetermined period of time).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to modify the system and method of Chen to mark images in predetermined marked positions and to use a confirmation signal to control image capturing timing based on a determination that an image has been marked by a finger in a predetermined marked position of a captured image for a predetermined period of time, as taught by Lee. One of ordinary skill in the art would have been motivated to make the modification to ensure that only intended images are captured, thereby reducing errors and computational overhead associated with decoding and processing incorrectly identified images. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (e.g., incorporating software into the system of Chen that triggers the camera to perform image capture when the system detects that a finger or cursor overlaps an image region for a predetermined period of time).
With regard to claim 13, as indicated above, Chen does not explicitly disclose marking images. As also indicated above in the rejections of claims 1, 5 and 12, Lee discloses using a finger as an indicating device to point to the selected target object to form the predetermined mark overlapping with the selected target object. The rejections of claims 1, 5 and 12 apply mutatis mutandis to claim 13.
With regard to claim 14, the rejections of claims 1, 5, 12 and 13 apply mutatis mutandis to claim 14. As indicated above in the rejection of claim 1, Lee teaches that the predetermined mark (i.e., the finger) overlaps the target object (Col. 3, lines 55-56 refers to “detecting a finger image in predetermined partial region of the captured image”, which means the predetermined mark overlaps the target image).
Regarding claim 15, to the extent that claim 15 recites the same limitations that are recited in claims 1 and 5, the rejections of claim 1 and 5 apply mutatis mutandis to claim 15.
Regarding elements recited in claim 15 that are not explicitly discussed above in the rejections of claims 1 and 5, Chen discloses:
a conversion module, converting each first target image to a string (as indicated above, the BRI for the conversion module is the one or more processors of the processing module configured to perform the conversion function, and equivalents; the barcode scanline extraction block of Fig. 2 of Chen converts the target images into decoded strings; section IV C (4) of Chen discusses processors and memory for performing the functions represented by the blocks of Figs. 2 and 7);
a comparison module, coupled to the first artificial neural network and the second artificial neural network, and comparing the second target image with each first target image to obtain a final selected image, the final selected image being the first target image with the highest similarity to the second target image (as indicated above, the BRI for the comparison module is the one or more processors of the processing module configured to perform the comparison function, and equivalents; section IV C (4) of Chen discusses processors and memory for performing the functions represented by the template matching blocks shown in Figs. 2 and 7; Chen discloses performing template matching to attempt to match the barcode scanline strings of the second target barcode images with barcode scanline strings stored in the scanline template banks and selecting the final selected image that has the highest similarity to the second target image; section I: “[a] dynamic template matching scheme has been designed to match deformed barcode signal against reference waveforms for barcode value detection. More specifically, once the location of the barcode and blur level is detected, deformed barcode waveform is extracted from the blurred image and segmented. After normalization, these waveform segments are compared with pre-computed standard reference waveform segments at the estimated blur level through dynamic programming by inferencing a directed graphical model. Then the reference waveform most similar to the observed barcode waveform is found. After being verified, the found reference waveform’s corresponding barcode value is treated as the output of the proposed barcode scanning system”);
a storage module, coupled to the processing module, and storing each first target image and the string corresponding to each first target image (section IV C (4) of Chen discusses processors and memory for performing the functions represented by the scanline template banks of Figs. 2 and 7);
an output module, coupled to the comparison module and the storage module, and reading the string corresponding to the final selected image from the storage module (Fig. 2 of Chen shows the output module at the output of the verification block that outputs the final selected barcode image with the highest similarity; section I: “[a]fter being verified, the found reference waveform’s corresponding barcode value is treated as the output of the proposed barcode scanning system”); and
a display, coupled to the output module, and displaying the string read by the output module (Chen does not explicitly disclose displaying the string that is output from the system).
Lee discloses that the recognized object, which can be a barcode image, for example, is displayed (Col. 3, lines 9-17 and lines 32-35).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the present disclosure, to couple the output of the processor of Chen to a display to enable the output of Chen to be displayed on the display. One of ordinary skill in the art would have been motivated to make the modification to allow processing results to be viewed by a user. The modification could have been made by one of ordinary skill in the art before the effective filing date of the present disclosure with a reasonable expectation of success because making the modification merely involves combining prior art elements according to known methods to yield predictable results (e.g., connecting an output port of the processor to a suitable display).
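Solely to illustrate the 112(f) module decomposition applied above, the following sketch shows one way the conversion, comparison, storage and output modules could be realized on one or more processors; the class and member names are assumptions made for illustration and appear in neither the references nor the present disclosure.

```python
# Illustrative only: one possible realization of the claim 15 modules as
# software executing on one or more processors, per the 112(f) interpretations.
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class BarcodeSystem:
    # storage module (memory device): first target image -> its decoded string
    stored: Dict[bytes, str] = field(default_factory=dict)

    def convert_and_store(self, first_image: bytes, decoded: str) -> None:
        """Conversion + storage modules: keep each image with its string."""
        self.stored[first_image] = decoded

    def compare(self, second_image: bytes,
                similarity: Callable[[bytes, bytes], float]) -> bytes:
        """Comparison module: the final selected image is the stored first
        target image most similar to the second target image."""
        return max(self.stored, key=lambda first: similarity(first, second_image))

    def output(self, final_image: bytes) -> str:
        """Output module: read the stored string for the final selected image."""
        return self.stored[final_image]
```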
Regarding claims 16 and 17, the rejections of claims 2 and 3 apply mutatis mutandis to claims 16 and 17, respectively.
With regard to claim 24, see the rejection of claims 12 and 13 above, which apply mutatis mutandis to claim 24.
With regard to claim 25, the rejection of claim 12 applies mutatis mutandis to claim 25. As indicated above, the BRI for the mark confirmation module is the one or more processors of the processing module configured to perform the corresponding functions, and equivalents. Lee discloses that “at least one processor” performs the operations discussed above in the rejection of claim 12 (Col. 8, lines 4-5).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Publ. Appl. No. 2022/0350997 A1 discloses a capture manager 120 that represents one or more components configured to capture images of the content 108, including the images 112a, 112b, using a camera of the HMD I/O hardware 104. Capturing may be performed based on an explicit command from a user, or may be performed automatically in response to a detected event, such as a specific framing of the content 108. The capture manager 120 may be further configured to perform text detection and OCR when the content 108 includes written text. More generally, the capture manager 120 may be configured to use any content recognition or content item recognition technique(s) suitable for a type of content being recognized. For example, as referenced above, the capture manager 120 may be configured to use various machine learning or artificial intelligence techniques to capture, interpret, or otherwise utilize the content 108. For example, when the content 108 includes images, the capture manager 120 may be configured to recognize image elements as individual content items. For example, the capture manager 120 may perform image recognition using a suitable trained machine learning model (e.g., a convolutional neural network, or CNN).
U.S. Publ. Appl. No. 2019/0279017 A1 discloses a labeling module 207 that cooperates with a capture device 247 to receive a scan of a barcode or other identifier associated with the selected item at the scene. In some embodiments, the labeling module 207 decodes the barcode scan and converts it into text. The labeling module 207 accesses a database of product information in the data storage 243, compares the converted text against the database, and determines symbolic information associated with the selected item for labeling. For example, the symbolic information may include a Universal Product Code (UPC), Stock Keeping Unit (SKU) identifier, physical dimensions (e.g., width, height, etc.), price, product name, brand name, size, color, packaging version, and other metadata (e.g., product description). In the instance where the database search does not yield a match for the scanned barcode, the user can manually label the selected item.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL J SANTOS whose telephone number is (571)272-2867. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matt Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL J. SANTOS/Examiner, Art Unit 2667
/MATTHEW C BELLA/Supervisory Patent Examiner, Art Unit 2667