DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Examiner Comments
Claims 1-20 were presented for examination. Applicant filed an amendment with remarks on 10/13/2025. After careful consideration of applicant’s arguments, a new ground of rejection has been established in the instant application, as set forth in detail below. Applicant's arguments with respect to the claims have been considered but are moot in view of the new ground(s) of rejection.
Applicant agreed to discuss the “Requirement For Information under 37 C.F.R. 1.105” and a possible amendment to the claims for further prosecution of the application, and a telephone interview was scheduled for October 28, 2025. No interview was held as scheduled.
Double Patenting
The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a non-statutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement.
Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-20 are rejected on the ground of non-statutory obviousness-type double patenting as being unpatentable over claims 1-18 of co-pending Application No. 18/516,869. Although the conflicting claims are not identical, they are not patentably distinct from each other because the '869 application teaches similar limitations to those recited in the instant application, as follows:
“a data capture device comprising an imaging assembly configured to capture images over one or more fields of view; in response to the data capture device being unable to identify a decodable indicia on the object, receive image data associated with the images captured by the imaging assembly, and identify one or more aspects of the object, and generate object candidate data corresponding to the object from the identification; and generate object identifier data for each of the one or more object selections, and transmit the object identifier data to the register log application” (claims 1, 6, 11 and 18); and “an electronic weight scale connected to the data capture device configured to detect, via a sensor of the electronic weight scale, a change in weight of a product presentation region, in response to detecting the change in weight, transmit a capture signal to the imaging assembly, wherein the product presentation region is within the one or more fields of view” (claims 3, 8, 13 and 18).
The co-pending '869 application does not recite the limitations of:
“an object prediction application deployed on either (i) the one or more memories of the data capture device or (ii) a host device and a selection application executing on the host device communicatively connected to the data capture device, the selection application configured to: receive the object candidate data, present a selection user interface via an interactive display of the host device, display, via the selection user interface, the object candidate data, receive one or more object selections of the object candidate data from a user interacting with the interactive display” as recited in the instant application.
This is a provisional double patenting rejection since the conflicting claims have not yet been patented.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea), and if so, it must additionally be determined whether the claim is a patent-eligible application of the exception. If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself. Examples of abstract ideas include fundamental economic practices; certain methods of organizing human activities; and mathematical relationships/formulas. Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. ____ (2014).
In the instant case, claims 1-20 are directed to an apparatus and method for receiving one or more object selections of the object candidate data from a user interacting with the interactive display, generating object identifier data for each of the one or more object selections, and transmitting the object identifier data to the register log application. Claims 1-20 are analyzed to determine whether they fall within a statutory category of invention and whether they recite a judicial exception; if a judicial exception is recited, the claims are further analyzed to determine whether the exception is integrated into a practical application and whether the claims provide an inventive concept, as per the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) and the October 2019 Update: Subject Matter Eligibility, as set forth below:
Analysis:
Step 1: Statutory Category? This part of the eligibility analysis evaluates whether the claim falls within any statutory category. MPEP 2106.03.
Claim 1 is directed to a system comprising a plurality of devices, including a processor and memory, for identifying an object. The claimed system is therefore directed to a statutory category, i.e., a machine (a combination of devices) (Step 1: YES).
Claim 6 is directed to a data capture device comprising one or more devices. The claimed apparatus is therefore directed to a statutory category, i.e., a machine (a combination of devices) (Step 1: YES).
Claim 11 is directed to a non-transitory computer-readable storage medium, which is a manufacture. The claim is thus directed to a statutory category of invention (Step 1: YES).
Claim 16 is directed to a process, i.e., a series of computer-implemented method steps or acts, for identifying an object. A process is one of the statutory categories of invention (Step 1: YES).
Step 2A - Prong 1: Judicial Exception Recited? This part of the eligibility analysis evaluates whether the claim recites a judicial exception. As explained in MPEP 2106.04(II) and the October 2019 Update, a claim “recites” a judicial exception when the judicial exception is “set forth” or “described” in the claim. There are no nature-based product limitations in this claim, and thus the markedly different characteristics analysis is not performed. However, the claim still must be reviewed to determine if it recites any other type of judicial exception.
Claims 1, 6, 11 and 16 are similar, and they are analyzed together to determine whether they are directed to a judicial exception. The claims recite a plurality of steps: “receive the object candidate data, present a selection user interface, display the object candidate data, receive object selections of the object candidate data, generate object identifier data for each object selection, and transmit the object identifier data to the register log application.”
The limitations listed above, as drafted, describe a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components, similar to Mortgage Grader, Inc. v. First Choice Loan Services, which involved a computer-implemented method and system for anonymous shopping of loan packages. That is, other than reciting “memory and a processor,” nothing in the claim elements precludes the steps from practically being performed in the mind, and thus they fall within the “mental processes” grouping of abstract ideas set forth in the 2019 PEG. 2019 PEG Section I, 84 Fed. Reg. at 52. For example, but for the “a processor communicatively coupled to said memory device” language, the “receive …, present …, display …, receive …, generate …, and transmit …” steps, in the context of these claims, encompass the user manually performing the recited steps. The recitation of a processor in these claims does not negate the mental nature of these limitations because the claims merely use the processor as a tool to perform the otherwise mental processes. See October Update at Section I(C)(ii). Thus, the above limitations recite concepts that fall into the “mental processes” grouping of abstract ideas. If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas (YES).
Step 2A - Prong 2: Integrated into a Practical Application? This part of the eligibility analysis evaluates whether the claim as a whole integrates the recited judicial exception into a practical application of the exception. This evaluation is performed by (a) identifying whether there are any additional elements recited in the claim beyond the judicial exception, and (b) evaluating those additional elements individually and in combination to determine whether the claim as a whole integrates the exception into a practical application. 2019 PEG Section III (A) (2), 84 Fed. Reg. at 54-55.
Besides the abstract idea described in Prong 1, the claims recite the additional elements of “an object prediction application deployed on either (i) the one or more memories of the data capture device or (ii) a host device, the object prediction application being configured to: in response to the data capture device being unable to identify a decodable indicia on the object, receive image data associated with the images captured by the imaging assembly, and identify one or more aspects of the object, and generate object candidate data corresponding to the object from the identification.”
The ordered combination of elements of the claims integrates the abstract idea into a practical application by providing a technical solution that addresses a limitation of conventional operation: it allows the host device to read the output of object-identifying applications, without having to specifically read the output when the object does not have a bar code, and to subsequently process the data transmitted to the host device by the data capture device, allowing fast and accurate identification of an object without an easily identifiable decodable indicia (see paragraph [0041]). Therefore, the claims are eligible because the recited judicial exception is integrated into a practical application as per the 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) and the October 2019 Update: Subject Matter Eligibility (YES).
Step 2B: Claim provides an Inventive concept? – Not Applicable.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 1, 2, 5-7, 10-12, 15-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wilfred et al., U.S. Pub No. 2020/0193281 (reference A in attached PTO-892) in view of Slaughter et al., U.S. Pub No. 2023/0095037 (reference B in attached PTO-892).
As per claim 1, Wilfred et al. teach an object prediction application deployed on either (i) the one or more memories of the data capture device or (ii) a host device (see Fig. 1, Scanner (104), Memory, Image Processor (116), Server (120), Object (108): paragraph [0028-0032]), the object prediction application being configured to:
in response to the data capture device being unable to identify a decodable indicia on the object, receive image data associated with the images captured by the imaging assembly (see Fig. 2, paragraph [0004, 0018, 0051]; Fig. 8, Match Step 806 and Step 808: YES/NO; where an object such as a drink bottle or package is scanned for its related physical features instead of its bar code/indicia alone, due to the problem of spoofed barcodes at the POS), and
identify one or more aspects of the object, and generate object candidate data corresponding to the object from the identification (see Fig. 5 and paragraph [0052-0053]; where physical features and 2D and 3D images of the object are scanned by the Bioptic Scanner).
Wilfred et al. do not teach a selection application executing on the host device communicatively connected to the data capture device, the selection application configured to: receive the object candidate data, present a selection user interface via an interactive display of the host device, wherein a register log application was initially displaying a register log user interface on the interactive display and the register log application is configured to receive object identifier data and process the object identifier data, display, via the selection user interface, the object candidate data, receive one or more object selections of the object candidate data from a user interacting with the interactive display, generate object identifier data for each of the one or more object selections, and transmit the object identifier data to the register log application.
Slaughter et al. teach a selection application executing on the host device communicatively connected to the data capture device (see Fig. 1, Camera (110), Computer Vision System (155); paragraph [0017-0019]), the selection application configured to: receive the object candidate data, present a selection user interface via an interactive display of the host device (see Fig. 6: Produce Item Lookup (605A), Bakery/Other Item Lookup (605A); paragraph [0048-0049]), wherein a register log application was initially displaying a register log user interface on the interactive display and the register log application is configured to receive object identifier data and process the object identifier data (see Fig. 7, Representative Image (705A, B, C and D), Select Item), display, via the selection user interface, the object candidate data, receive one or more object selections of the object candidate data from a user interacting with the interactive display (see paragraph [0051]), generate object identifier data for each of the one or more object selections, and transmit the object identifier data to the register log application (see Fig. 9, Please select one of the following: 705 A, B, C, and D; You entered this item: paragraph [0051, 0057]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the above features taught by Slaughter et al. to Wilfred et al., because Slaughter et al. teach that including the above features would enable double-checking the item identity provided by a shopper at the POS system scanner, and the machine vision system can supplement a barcode scanner and correct its mistakes (see abstract, paragraph [0012-0014]).
As per claims 2, 7, 12 and 17, Wilfred et al. teach claim 1 as described above.
Wilfred et al. do not teach wherein presenting the selection user interface causes the selection user interface to be displayed to a foreground of the interactive display and receiving the one or more object selections causes the selection user interface to be displayed to a background of the interactive display.
Slaughter et al. teach wherein presenting the selection user interface causes the selection user interface to be displayed to a foreground of the interactive display and receiving the one or more object selections causes the selection user interface to be displayed to a background of the interactive display (see Fig. 7: 710 A, B, C and D; and Fig. 9: Please select one of the following: 705 A,B, C, and D; You entered this item: paragraph [0051-0052, 0057]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the feature wherein presenting the selection user interface causes the selection user interface to be displayed to a foreground of the interactive display and receiving the one or more object selections causes the selection user interface to be displayed to a background of the interactive display, as taught by Slaughter et al., to Wilfred et al., because Slaughter et al. teach that including the above features would enable double-checking the item identity provided by a shopper at the POS system scanner, and the machine vision system can supplement a barcode scanner and correct its mistakes (see abstract, paragraph [0012-0014]).
As per claims 5, 10, 15 and 20, Wilfred et al. teach claim 1 as described above.
Wilfred et al. do not teach wherein the object candidate data includes a determined object classification and displaying, via the selection user interface, the object candidate data includes displaying objects from the determined object classification.
Slaughter et al. teach wherein the object candidate data includes a determined object classification and displaying, via the selection user interface, the object candidate data includes displaying objects from the determined object classification (see Fig. 5, Step 515: paragraph [0051]: Fig. 7, Representative Image (710A to D)).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the feature wherein the object candidate data includes a determined object classification and displaying, via the selection user interface, the object candidate data includes displaying objects from the determined object classification, as taught by Slaughter et al., to Wilfred et al., because Slaughter et al. teach that including the above features would enable double-checking the item identity provided by a shopper at the POS system scanner, and the machine vision system can supplement a barcode scanner and correct its mistakes (see abstract, paragraph [0012-0014]).
As per claim 6, Wilfred et al. teach a data capture device comprising: an imaging assembly configured to capture images over one or more fields of view; one or more processors connected to the imaging assembly; one or more memories communicatively coupled to the one or more processors; and computing instructions stored on the one or more memories (see Fig. 1, Scanner (104), Memory, Image Processor (116), Server (120), Object (108): paragraph [0028-0032]) that, when executed, cause the data capture device to: capture, via the imaging assembly, images of an object in one or more fields of view (see Fig. 1, Camera; paragraph [0028-0029]) and execute the steps as described in claim 1 above.
As per claim 11, Wilfred et al. teach a tangible, non-transitory computer-readable medium storing instructions that, when executed by one or more processors of a host device (see Fig. 1, Scanner (104), Memory, Image Processor (116), Server (120), Object (108): paragraph [0028-0032]), cause the host device to implement the steps as described in claim 1 above.
As per claim 16, Wilfred et al. teach a computer-implemented method comprising: capturing, via an imaging assembly, images of an object in one or more fields of view (see Fig. 1, Scanner (104), Memory, Image Processor (116), Server (120), Object (108): paragraph [0028-0032]) and implementing the steps as described in claim 1 above.
Claims 3, 8, 13 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wilfred et al., U.S. Pub No. 2020/0193281 (reference A in attached PTO-892) in view of Slaughter et al., U.S. Pub No. 2023/0095037 (reference B in attached PTO-892), further in view of Gururaja et al., U.S. Pub No. 2021/0374375 (reference C in attached PTO-892).
As per claims 3, 8, 13 and 18, Wilfred et al. teach claim 1 as described above.
Wilfred et al. further do not teach an electronic weight scale connected to the data capture device configured to: detect, via a sensor of the electronic weight scale, a change in weight of a product presentation region, in response to detecting the change in weight, transmit a capture signal to the imaging assembly, wherein the product presentation region is within the one or more fields of view.
Gururaja et al. teach an electronic weight scale connected to the data capture device (see Fig. 1: paragraph [0016-0018]), configured to: detect, via a sensor of the electronic weight scale, a change in weight of a product presentation region, and in response to detecting the change in weight, transmit a capture signal to the imaging assembly, wherein the product presentation region is within the one or more fields of view (see Fig. 4, Barcode Reader (402), Weighing Scale (410), Motion Detector (411), Camera (408): paragraph [0024-0027, 0030]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply an electronic weight scale connected to the data capture device configured to: detect, via a sensor of the electronic weight scale, a change in weight of a product presentation region, and in response to detecting the change in weight, transmit a capture signal to the imaging assembly, wherein the product presentation region is within the one or more fields of view, as taught by Gururaja et al., to Wilfred et al., because Gururaja et al. teach that including the above features would enable generating an alert indicating potential scan avoidance, responsive to both a measured unstable weight over the time frame and a barcode read failure, if a product is not placed within the field of view on the weighing scale for a threshold duration (see abstract, paragraph [0012-0014]).
Claims 4, 9, 14 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Wilfred et al., U.S. Pub No. 2020/0193281 (reference A in attached PTO-892) in view of Slaughter et al., U.S. Pub No. 2023/0095037 (reference B in attached PTO-892), further in view of Shao et al., U.S. Patent No. 9,330,292 (reference D in attached PTO-892).
As per claims 4, 9, 14 and 19, Wilfred et al. teach claim 1 as described above.
Wilfred et al. do not teach that the object prediction application is deployed on the one or more memories of the data capture device, the data capture device is configured to: in response to the imaging assembly capturing the images, generate a timer that measures an amount of time that has elapsed since the images were captured, detect that the timer reached a threshold amount of time before the object prediction application has generated the object candidate data, and in response to detecting that the timer reached the threshold amount of time, transmit a time-out signal to the selection application, and the selection application is further configured to:
in response to receiving the time-out signal, display, via the selection user interface, one or more of: (i) a complete object listing or (ii) a notification indicating that one or more objects placed in a field of view of the imaging assembly are one or more of (a) obstructed by an unknown object, (b) too far from the imaging assembly, (c) too close to the imaging assembly, or (d) need to be reoriented.
Slaughter et al. teach the object prediction application being deployed on the one or more memories of the data capture device, the data capture device being configured to: in response to the imaging assembly capturing the images, display, via the selection user interface, one or more of: (i) a complete object listing (see Fig. 5, Steps 505, 510 and 515: Yes; Fig. 7, paragraph [0050-0051]; Fig. 9: paragraph [0057-0058]; where the Computer Vision system predicts/identifies potential items based on the item placed on the scanner).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the object prediction application being deployed on the one or more memories of the data capture device, the data capture device being configured to: in response to the imaging assembly capturing the images, display, via the selection user interface, one or more of: (i) a complete object listing, as taught by Slaughter et al., to Wilfred et al., because Slaughter et al. teach that including the above features would enable double-checking item identity and supplementing a barcode scanner to correct its mistakes (see abstract, paragraph [0012-0014]).
Gururaja et al. and Shao et al. teach generating a timer that measures an amount of time that has elapsed since the images were captured, detecting that the timer reached a threshold amount of time before the object prediction application has generated the object candidate data, and in response to detecting that the timer reached the threshold amount of time, transmitting a time-out signal to the selection application, the selection application being further configured to:
in response to receiving the time-out signal, display, via the selection user interface, one or more of: (i) a complete object listing or (ii) a notification indicating that one or more objects placed in a field of view of the imaging assembly are one or more of (a) obstructed by an unknown object, (b) too far from the imaging assembly, (c) too close to the imaging assembly, or (d) need to be reoriented (Gururaja et al.: abstract, paragraph [0024-0030]; Shao et al.: abstract, column 7, lines 39-46).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply generating a timer that measures an amount of time that has elapsed since the images were captured, detecting that the timer reached a threshold amount of time before the object prediction application has generated the object candidate data, and in response to detecting that the timer reached the threshold amount of time, transmitting a time-out signal to the selection application, the selection application being further configured to:
in response to receiving the time-out signal, display, via the selection user interface, one or more of: (i) a complete object listing or (ii) a notification indicating that one or more objects placed in a field of view of the imaging assembly are one or more of (a) obstructed by an unknown object, (b) too far from the imaging assembly, (c) too close to the imaging assembly, or (d) need to be reoriented, as taught by Gururaja et al. and Shao et al., to Wilfred et al., because Gururaja et al. teach that including the above features would enable generating an alert indicating potential scan avoidance, responsive to both a measured unstable weight over the time frame and a barcode read failure, if a product is not placed within the field of view on the weighing scale during the threshold duration (see abstract, paragraph [0012-0014]).
Response to Arguments
A new ground of rejection has been established in the instant application after careful consideration of applicant’s arguments, as described above. Applicant's arguments with respect to the claims have been considered but are moot in view of the new ground(s) of rejection.
Examiner maintains the obviousness-type double patenting (ODP) rejection over the co-pending '869 application, since the instant application’s claims would create an unjustified timewise extension of the patent term because they are not patentably distinct from the co-pending claims. A provisional double patenting rejection has been issued because the instant application's claims are an obvious variation of the co-pending application's claims, despite not being identical.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The following references are pertinent to the current invention, though not relied upon:
Pan et al. (U.S. Pub No. 2018/0239989) teach type prediction for recognizing an object in an image (Assignee: Zebra Technologies Corporation).
Pang et al. (U.S. Pub No. 2020/0192608) teach improving the accuracy of a convolutional neural network training image data set for a loss prevention application.
Barkan et al. (U.S. Pub No. 2021/0295078) teach multiple field of view (FOV) vision system for trained image (Assignee: Zebra Technologies Corporation).
Chen et al. (Aug 2023) teach object detection and instance segmentation in computer vision.
Chang et al. (CN 113377932) teach automated selection of objects in images.
Rodriguez et al. (U.S. Pub No. 2021/0157998) teach arrangements for identifying objects.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIJENDRA K SHRESTHA, whose telephone number is (571) 270-1374. The examiner can normally be reached 8:00AM-5:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abhishek Vyas can be reached on (571) 270-1836. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Respectfully submitted,
/BIJENDRA K SHRESTHA/Primary Examiner, Art Unit 3691 01/23/2026