Prosecution Insights
Last updated: April 19, 2026
Application No. 18/217,282

OBJECT IDENTIFICATION BASED ON A PARTIAL VISUAL OBJECT IDENTIFIER CODE SCAN AT POINT OF SALE

Final Rejection — §101, §102, §103
Filed: Jun 30, 2023
Examiner: TUTOR, AARON N
Art Unit: 3627
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Toshiba Global Commerce Solutions, Inc.
OA Round: 2 (Final)

Grant Probability: 32% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 7m
Grant Probability With Interview: 67%

Examiner Intelligence

Career Allow Rate: 32% (52 granted / 162 resolved; -19.9% vs Tech Center average)
Interview Lift: +34.5% among resolved cases with interview
Typical Timeline: 3y 7m average prosecution; 39 applications currently pending
Career History: 201 total applications across all art units
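The headline figures above are simple ratios of the underlying counts. A minimal sketch of how they can be reproduced follows; the counts and the 67% with-interview rate come from this page, but treating "interview lift" as a percentage-point difference is an assumption, not the vendor's stated formula.

```python
# Sketch of how the examiner stats above can be derived from raw counts.
# The counts (52 granted of 162 resolved) and the 67% with-interview rate
# come from this page; the lift formula below is an assumption.

def allow_rate_pct(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate_pct(52, 162)       # ~32.1%, displayed as 32%
with_interview = 67.0                  # page-reported rate for interviewed cases
lift = with_interview - career         # percentage-point difference

print(f"Career allow rate: {career:.1f}%")
print(f"Interview lift: about +{lift:.1f} points")
```

This yields a lift of roughly +34.9 points rather than the displayed +34.5%, which suggests the tool computes its baseline over a slightly different cohort than the full career record.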

Statute-Specific Performance

§101: 32.8% (-7.2% vs TC avg)
§102: 15.3% (-24.7% vs TC avg)
§103: 43.4% (+3.4% vs TC avg)
§112: 6.8% (-33.2% vs TC avg)
Tech Center averages are estimates. Based on career data from 162 resolved cases.

Office Action

§101 §102 §103
DETAILED ACTION

This action is in reply to the submission filed on 6/30/2023.

Status of Claims

Applicant's amendments to claims 1-2, 4, 7, 11-12, 15, and 17-20 are acknowledged. Claims 1-20 are currently pending and have been examined.

Response to Remarks

Applicant's remarks filed 10/24/2025 have been fully considered but have not been found persuasive. Regarding pages 10 and 11 of the remarks, Barkan's paragraphs 52-54 demonstrate a method of identifying an object by using at least one of a partial barcode and/or character reading, as well as object recognition on an image of part of said object. Under the Examiner's broadest reasonable interpretation of the claims, this reference teaches the claimed correlation of contextual image features and stored object identifiers (including barcodes) to identify an object without full barcode decoding. For example, Barkan paragraph 52 reads, "trained object recognition model 310…may determine a second product type based on second image data…and at least one of…at least a portion of a second barcode…or…a second production identification probability of the second item as determined by the trained object recognition model". This is seen as reading on the two-pronged approach to item identification as claimed.

Information Disclosure Statement

The information disclosure statement filed 9/12/2023 fails to comply with 37 CFR 1.98(a)(2), which requires a legible copy of each cited foreign patent document, as well as a translation or statement of relevancy; each non-patent literature publication or that portion which caused it to be listed; and all other information or that portion which caused it to be listed. It has been placed in the application file, but the information crossed through therein has not been considered. See MPEP § 609.04(a).

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: The claims fall under the statutory categories of processes and/or machines.

Step 2A, Prong 1: The claims recite: analyze image data to detect a portion of a visual object identifier pattern including fewer than all characters of a corresponding code, obtain a partial identifier segment from the detected portion of the pattern and a contextual feature of the object from the image data, and determine the identifier by correlating the partial identifier and the contextual feature with a set of stored object identifiers. These limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers certain mental processes, including an observation, evaluation, and judgment.

Step 2A, Prong 2: The judicial exception is not integrated into a practical application because the claims as a whole, looking at the additional elements (processor, memory, optical sensor, machine readable code) individually and in combination, merely use a computer or other machinery as a tool to perform the abstract idea (see MPEP 2106.05(f)). The claims use these machines in their ordinary capacity for the purpose of applying the abstract idea(s). Therefore, these limitations invoke computers or other machinery merely as a tool to perform an existing process, such that they amount to no more than mere instructions to apply the exception. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea, and the claim is directed to an abstract idea.
Step 2B: The claims recite the additional elements listed above, which are not sufficient to amount to significantly more than the judicial exception because, as discussed in Step 2A, Prong 2, they use computers or other machinery to perform an abstract idea in a way that amounts to no more than mere instructions to apply the exception using computers or other machinery. Mere instructions to apply an exception using computers or other machinery cannot provide an inventive concept. Therefore, the claims are not patent eligible.

Claim 2 recites receiving a portion of characters. Claims 3 and 16 recite identifying the object based on the set of object identifiers and the portion of characters that represents the object identifier. Claims 4 and 19 recite receiving image data of the object and identifying the object based on the identifiers, image, and character portion. They also recite a second optical sensor positioned toward the region associated with scanning object codes. The combination of the two optical sensors and their positions is not seen as significantly limiting the abstract idea, and is seen as using the technology in its ordinary capacity. Claim 5 recites computerized data transmission, which is "apply it"-level technology. It also recites a request to verify the object in the image, and training an artificial intelligence circuit on image data to perform identification of the object. The combination of the two sensors, machine learning, and processor/memory/network is not a specific solution to a technological problem. The inventive concept lies in identifying items using comparative analysis of reference databases and object image data of contextual features such as partial characters. The combination of off-the-shelf components is seen as necessary to implement the idea in a computerized embodiment, but does not offer any solution to the problem being solved, hence the "apply it" conclusion.
Claims 6 and 20 recite identifying the object based on the character portion, object identifiers, and a weight. They also recite a load sensor and data transmission. The combination of the processor/memory, two image sensors, load sensor, and machine learning is seen as an "apply it" level of technology. Claim 7 recites determining that the measured weight corresponds to a range of potential weights for the object. Claim 8 recites determining the identifiers that correspond to the character portions. Claim 9 recites determining the identifiers that have the same character positions. Claim 10 recites determining the identifiers that have the same character order as the character portion. Claims 11-12 and 17-18 recite a presence-sensitive display to display a request to select one of the candidate objects, and receiving a touch gesture. The combination of the additional elements is seen as "apply it" level. Claim 13 recites characters at a first and last position. Claim 14 recites that the first and last characters are unscanned, undetected, or undecoded. For these reasons, the claims are not subject matter eligible.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-5, 8, 9, 15, 16 and 19 are rejected under 35 U.S.C.
102(a)(1) as being anticipated by Barkan (US 2021/0264215 A1).

Claims 1 and 15. Barkan teaches a method, comprising: by a point of sale (POS) system having processing circuitry operably coupled to a first optical sensor (para. 5, processor; para. 17 of Barkan showing POS with optical sensor) configured to capture image data of objects disposed within a checkout region without requiring alignment of a machine readable code with the sensor (Abstract, imager FOV over scanning area and items within FOV), the first optical sensor being operable to analyze the captured image data to detect at least a portion of a visual object identifier pattern including fewer than all characters of a corresponding machine readable code (para. 22 of Barkan showing optic imager for barcodes; para. 54 showing portion of barcode and one or more character symbols), with each code representing one of a set of object identifiers, with each identifier being specific to a certain object and represented by a series of characters (para. 54 showing barcode with characters being identified) and associated with a contextual feature of that object that includes a representative object image (para. 52 showing image recognition with comparative database analysis), with the first optical sensor being positioned on or about the POS system so that a field of view of the first optical sensor is directed towards a presentation region or the checkout region of the POS system (para. 5 showing field of view of sensor in scanning area); obtaining, based on the captured image data, a partial identifier segment from the detected portion of the visual object identifier pattern and a contextual feature of the object determined from the image data; and (para.
52 showing partial barcode and partial object feature analysis) determining the corresponding object identifier by correlating the obtained partial identifier segment and the obtained contextual feature with the set of stored object identifiers so that the certain object can be identified even when the visual object identifier code is only partially captured and without requiring complete decoding of the machine readable code (para. 46 showing identification from a portion of a barcode where the other portion is not scannable; para. 54 showing OCR for character recognition; para. 52 showing partial barcode and partial object feature analysis for object recognition).

Claim 15 additionally recites: a memory containing instructions executable by processing circuitry in the POS (para. 59 showing processing circuitry and instructions).

Claim 2. Barkan teaches the method of claim 1, wherein the obtaining step includes: receiving, by the processing circuitry of the POS system, from the first optical sensor, an indication that includes the portion of the series of characters that represents the corresponding object identifier (para. 5 showing a portion of barcode and para. 54 showing barcode character/symbol recognition).

Claims 3 and 16. Barkan teaches the method of claim 1, further comprising: identifying the certain object based on the set of object identifiers and the portion of the series of characters that represents the corresponding object identifier (para. 5 showing a portion of barcode and para. 54 showing barcode character/symbol recognition).

Claims 4 and 19. Barkan teaches the method of claim 1, further comprising: receiving, by a processing circuit of the POS system, from a second optical sensor of the POS system, image data that represents a captured image of the certain object (para.
23 showing secondary imager taking pictures of items), with the second optical sensor being positioned on or about the POS system so that a field of view of the second optical sensor is directed towards the presentation region associated with scanning the visual object identifier codes disposed on the objects (para. 23 showing secondary imager positioned above POS station); and identifying the certain object based on the set of object identifiers, the image data, and the portion of the series of characters that represents the corresponding object identifier (para. 29 showing primary and secondary images transmitted; para. 36 showing object identification using image data and barcode identification; para. 39 showing character recognition).

Claim 5. Barkan teaches the method of claim 4, wherein the identifying step further comprises: sending, by the POS system, to a network node over a network (paragraphs 32 and 33 showing transmission of data through networking interface; paras. 42 and 57 showing server receiving data from imagers on POS), an indication that includes a request to verify that the certain object corresponds to the one of the candidate objects based on the image data of the certain object (para. 38 showing directions for identifying item based on image), wherein the network node includes an artificial intelligence circuit operable to perform object identification so as to determine that the certain object corresponds to one of the candidate objects (para. 39 showing neural network identifier), with the artificial intelligence circuit being trained on image data associated with the sets of objects (para. 39 showing training network with said image data representative of items); receiving, by the POS system, from the network node over the network, an indication that the certain object corresponds to the one of the candidate objects; and wherein the identifying step is further based on the received indication.
(para. 49 showing said identification based on trained object recognition model).

Claim 8. Barkan teaches the method of claim 1, further comprising: determining those object identifiers of the set of object identifiers that correspond to the portion of the series of characters to obtain candidate object identifiers (para. 54 showing identification of object through OCR).

Claim 9. Barkan teaches the method of claim 8, wherein the object identifier determining step further comprises: determining those object identifiers of the set of object identifiers that have the same characters in the same positions as the portion of the series of characters (para. 50 showing position detection of barcode/number on items).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 6-7 and 20 are rejected under 35 U.S.C.
103 as being unpatentable over Barkan in view of Ron (US 11,620,822).

Claims 6 and 20. Barkan teaches the method of claim 1, further comprising: identifying the certain object based on the portion of the series of characters that represents the corresponding object identifier, the set of object identifiers, and the weight measurement (para. 52 showing confirming weight of item alongside object identification and barcode identification; para. 54 showing character recognition). Barkan does not, but Ron teaches: receiving, by a processing circuit of the POS system, from a load sensor of the POS system, an indication that includes a weight measurement of the certain object (Ron column 16, lines 20-27 showing weight sensor data used by system). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of object identification by weight in Barkan with the known technique of data collection by load sensor in Ron, because applying the known technique would have yielded predictable results and resulted in an improved system by allowing for accurate data input (Ron column 16, lines 20-27 showing sensor data used to identify item).

Claim 7. Barkan as modified by Ron teaches the method of claim 6. Barkan does not, but Ron teaches wherein the identifying step further comprises: determining that the measured weight of the certain object corresponds to a range of potential weights associated with one of the set of candidate objects (Ron column 16, lines 20-27 showing a weight range for an item, and matching measured weight to said range), wherein each candidate object has a certain range of potential weights.
(Ron column 16, lines 20-27 showing a weight range for an item, and matching measured weight to said range.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of object identification by weight in Barkan with the known technique of data collection by load sensor in Ron, because applying the known technique would have yielded predictable results and resulted in an improved system by allowing for accurate data input (Ron column 16, lines 20-27 showing sensor data used to identify item).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Barkan in view of Brunelli (US 6,305,606).

Claim 10. Barkan teaches the method of claim 8. Barkan teaches matching one or more character symbols to an item. Barkan does not, but Brunelli teaches wherein the object identifier determining step further comprises: determining those object identifiers of the set of object identifiers that have the same characters in the same order as the portion of the series of characters (column 2, lines 41-61 showing matching of character position). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of object identification in Barkan with the known technique of character position matching in Brunelli, because applying the known technique would have yielded predictable results and resulted in an improved system by allowing for barcode identification (Brunelli column 2, lines 41-61 showing character position matching for item identification).

Claims 13 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Barkan in view of Bachelder (US 2019/0108379).

Claim 13. Barkan teaches the method of claim 1.
Barkan does not, but Bachelder teaches: wherein the portion of the series of characters includes characters at a first position and a last position of the series of characters (paragraphs 142 and 143 showing first and last character), with one or more positions between the first and last positions of the series of characters corresponding to unscanned, undetected or undecoded characters of the series of characters (para. 143 showing undecoded characters within the string portion). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of object identification in Barkan with the known technique of character decoding in Bachelder, because applying the known technique would have yielded predictable results and resulted in an improved system by allowing for improved character recognition (Bachelder para. 142 demonstrating techniques for character recognition).

Claim 14. Barkan teaches the method of claim 1. Barkan does not, but Bachelder teaches wherein one or more positions that start at the first position or end at the last position of the series of characters correspond to unscanned, undetected or undecoded characters of the series of characters (paragraphs 143 and 144 showing undecoded characters being the last character). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of object identification in Barkan with the known technique of character decoding in Bachelder, because applying the known technique would have yielded predictable results and resulted in an improved system by allowing for improved character recognition (Bachelder para. 142 demonstrating techniques for character recognition).

Claims 11, 12, 17 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Barkan in view of Forutanpour (US 2022/0114854).

Claims 11 and 17.
Barkan teaches the method of claim 1, further comprising: determining those object identifiers of the set of object identifiers that correspond to the portion of the series of characters to obtain a set of candidate object identifiers (para. 54 showing OCR recognition of characters on item for item identification); obtaining those objects that are specific to the candidate object identifiers to obtain a set of candidate objects (para. 54 showing determining product type); and identifying the certain object as the selected object (para. 54 showing determining product type). Barkan does not, but Forutanpour teaches: outputting, for display on a presence-sensitive display of the POS system (para. 153 showing presence detecting display on POS), a visual representation associated with a request to select one of the set of candidate objects (paragraphs 123 and 124 showing selection of objects on display); receiving, from the presence-sensitive display, an indication of a touch gesture (para. 64, touchscreen) detected at or about the visual representation associated with one of the set of candidate objects (paragraphs 123 and 124 showing selection of objects on display); and determining that the detected touch gesture corresponds to one of the set of candidate objects to obtain a selected object (paragraphs 123 and 124 showing selection of objects on display). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of object identification in Barkan with the known technique of touch input in Forutanpour, because applying the known technique would have yielded predictable results and resulted in an improved system by allowing for accurate data input (Forutanpour para. 153 showing presence detection for touchscreen to reduce power consumption).

Claims 12 and 18. Barkan as modified by Forutanpour teaches the method of claim 11.
Barkan further teaches: confirming that the certain object is the selected object. Barkan does not, but Forutanpour teaches: outputting, for display on a presence-sensitive display of the POS system (para. 153 showing presence detecting display on POS), a visual representation associated with a request to verify that the certain object is the selected object (paragraphs 123 and 124 showing selection of objects on display); receiving, from the presence-sensitive display, an indication of a touch gesture (para. 64, touchscreen) detected at or about the visual representation associated with the verification request (paragraphs 123 and 124 showing selection of objects on display); and determining that the detected touch gesture corresponds to the visual representation associated with the verification request (paragraphs 123 and 124 showing selection of objects on display). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the system of object identification in Barkan with the known technique of touch input in Forutanpour, because applying the known technique would have yielded predictable results and resulted in an improved system by allowing for accurate data input (Forutanpour para. 153 showing presence detection for touchscreen to reduce power consumption).

Claim 18 additionally recites: determining those object identifiers of the set of object identifiers that correspond to the portion of the series of characters to obtain candidate object identifiers (Barkan para. 54 showing OCR recognition of characters on item for item identification); obtaining those objects that are specific to the candidate object identifiers to obtain the candidate objects (Barkan para. 54 showing determining product type); and identifying the certain object as the one of the candidate objects (Barkan para.
54 showing determining product type).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, this action is made final. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Aaron Tutor, whose telephone number is 571-272-3662. The examiner can normally be reached Monday through Friday, 9 AM to 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fahd Obeid, can be reached at 571-270-3324. The fax number for the organization where this application or proceeding is assigned is 571-273-5266.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AARON N TUTOR/
Examiner, Art Unit 3627
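To make the dispute over claims 9, 10, and 13-14 concrete: the claimed matching retains only those stored identifiers whose characters agree with the decoded positions of the partial code, treating undecoded positions as unknowns. A hypothetical sketch follows; the function name, wildcard convention, and sample data are illustrative only, not taken from the application or any cited reference.

```python
# Hypothetical sketch of position-wise partial-identifier matching as
# described in claims 9 and 13 (same characters in the same positions,
# undecoded positions treated as wildcards). Illustrative only; this is
# not the applicant's or any reference's actual implementation.

def match_partial(partial: str, identifiers: list[str], wildcard: str = "?") -> list[str]:
    """Return stored identifiers consistent with a partially decoded code."""
    return [
        ident for ident in identifiers
        if len(ident) == len(partial)
        and all(p == wildcard or p == c for p, c in zip(partial, ident))
    ]

stored = ["012345678905", "012395678905", "912345678906"]
# Claim 13 scenario: first and last characters decoded, middle unreadable.
candidates = match_partial("0??????????5", stored)
print(candidates)  # the two identifiers starting with 0 and ending with 5
```

In the claimed method, a contextual feature such as a representative image or a weight range would then narrow the surviving candidates to a single object, which is the step the applicant argues Barkan does not teach.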

Prosecution Timeline

Jun 30, 2023
Application Filed
Jul 23, 2025
Non-Final Rejection — §101, §102, §103
Oct 22, 2025
Applicant Interview (Telephonic)
Oct 22, 2025
Examiner Interview Summary
Oct 24, 2025
Response Filed
Dec 08, 2025
Final Rejection — §101, §102, §103
Mar 17, 2026
Applicant Interview (Telephonic)
Mar 18, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602929 — SYSTEM, METHOD AND APPARATUS FOR DETECTING ARTICLE STORE OR RETRIEVE OPERATIONS — Granted Apr 14, 2026 (2y 5m to grant)
Patent 12586036 — PAY STATEMENT SETUP — Granted Mar 24, 2026 (2y 5m to grant)
Patent 12567024 — RFID BASED SEQUENCING SYSTEM AND METHOD — Granted Mar 03, 2026 (2y 5m to grant)
Patent 12567048 — HARDWARE SYSTEM FOR IDENTIFYING GRAB-AND-GO TRANSACTIONS IN A CASHIERLESS STORE — Granted Mar 03, 2026 (2y 5m to grant)
Patent 12567025 — SYSTEM AND METHOD FOR PROACTIVE AGGREGATION — Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 32%
With Interview: 67% (+34.5%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 162 resolved cases by this examiner. Grant probability derived from career allow rate.
