Prosecution Insights
Last updated: April 19, 2026
Application No. 17/749,100

Frictionless Retail Stores and Cabinets

Status: Non-Final Office Action (§103)
Filed: May 19, 2022
Examiner: SHAPIRO, JEFFREY ALAN
Art Unit: 3619
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Swyft, Inc.
OA Round: 3 (Non-Final)

Grant Probability: 55% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 9m
Grant Probability With Interview: 70%

Examiner Intelligence

Grants 55% of resolved cases.

Career Allow Rate: 55% (483 granted / 881 resolved; +2.8% vs TC average)
Interview Lift: +15.7% for resolved cases with an interview (a strong lift)
Typical Timeline: 3y 9m average prosecution
Career History: 928 total applications across all art units; 47 currently pending
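For reference, the headline figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch of that arithmetic (the TC-average baseline is back-derived from the stated +2.8% delta; how the page actually computes these is an assumption):

```python
# Sketch: the headline Examiner Intelligence figures as simple ratios.
granted, resolved = 483, 881
allow_rate = granted / resolved     # 0.548 -> shown as "55%"
tc_average = allow_rate - 0.028     # implied by the "+2.8% vs TC avg" delta
print(f"Career allow rate: {allow_rate:.1%} (TC average ~{tc_average:.1%})")
# -> Career allow rate: 54.8% (TC average ~52.0%)
```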

Statute-Specific Performance

Statute   Examiner rate   vs TC average
§101      3.5%            -36.5%
§103      52.5%           +12.5%
§102      19.7%           -20.3%
§112      20.3%           -19.7%

TC average shown for comparison is an estimate. Based on career data from 881 resolved cases.
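One detail worth noting: every "vs TC average" delta in the table resolves against the same baseline. A short check of that arithmetic (rates and deltas are the chart's own figures; the single-baseline reading is an inference, not stated by the page):

```python
# Sketch: back-deriving the Tech Center average behind each "vs TC avg" delta.
# Rates and deltas are the chart's own figures (percent).
rates  = {"§101": 3.5, "§103": 52.5, "§102": 19.7, "§112": 20.3}
deltas = {"§101": -36.5, "§103": +12.5, "§102": -20.3, "§112": -19.7}
for statute, rate in rates.items():
    tc_avg = rate - deltas[statute]
    print(f"{statute}: examiner {rate:.1f}% vs TC average {tc_avg:.1f}%")
# Every row resolves to a 40.0% baseline, so the chart appears to compare
# each statute-specific rate against a single TC-average estimate.
```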

Office Action (§103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/23/25 has been entered.

Claim Rejections – 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 6-20, 22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Hay (WO 2016/004235 A2) in view of Zalewski et al. (US 9,911,290 B1).

Regarding Claim 1, Hay teaches a method for conducting a purchase transaction, as illustrated in figures 1-29 and as mentioned in paragraph 10, for example, comprising: sensing, by a first sensor, i.e., camera (1910), as mentioned at paragraph 194 and as illustrated in figure 19, removal or return of a first item from a first region, i.e., as mentioned at paragraph 194, first and second sentences, stating "[a]lternatively, a product image recognition software system (PIRS) may be used to detect the presence, placement or the removal of products on or off the trays" and "[s]uch PIRS will utilize a built-in camera, such as the camera 1910 illustrated in FIG. 19, which may be affixed to the bottom of each tray, the bottom of the AMP unit 1100, or in such other suitable place so as to be able to take images of the entirety of the items and/or compartments of a tray"; sensing, by a computer vision sensor (1910), removal or return of a second item from the first region, noting that Hay's system is intended to repetitively sense and record placement or removal of any items from the trays as mentioned in paragraphs 194 and 195, for example; verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to a single event of removal or return of an item by a consumer, as mentioned in paragraphs 25, 128, 148, 194 and 195, last two sentences stating "[t]he system will verify that the scanned item matches the product's assignment which is associated with the compartment that the item was returned to and issue the credit to the customer" and "[i]f the customer wishes to return a product or item after the transaction is closed, the return may require customer service assistance"; and applying a purchase price of the item against an account of the consumer for purchase of the item, thereby completing the purchase transaction, as mentioned at paragraphs 89, 127 and 128, which state as follows.

[Para 89] With reference now to FIG. 2, another ASO 10 is shown having a check-in station 12, as described above, and a plurality of bays 22 within two modular units. In this case, however, it will be seen that the modular bays 22 are of a different configuration and size than that illustrated in FIG. 1. Moreover, one of the bays 22 includes items related to a drink dispenser 34. The drink dispenser 34 typically comprises a coffeemaker, but can also provide hot water for making tea, cold water for making cold flavored beverages, etc. The coffee K-cups®, teabags, flavoring packets, cups, etc. would be accessed through the locked door 24 of the bay 22, and after retrieving and purchasing these items, the customer could make the desired beverage using the beverage device 34. The beverage device 34 may or may not be behind a closed and locked door 24. Other type of hot or cold single cup beverage dispensers 34 may be provided. Such dispensers are controlled by the ASO's computer system and dispense selected beverages only to qualified customers. In order to activate the beverage dispenser 34 the customer must have provided customer identification and payment information which was verified, such as by entering such information into an access unit 26 at the bay in which the beverage dispenser 34 is disposed, or into such an access unit which is electronically connected to the beverage dispensing device 34. At that time, the system of the ASO changes the machine mode from standby to active and the machine processes the customer's drink selections. The beverage dispenser device 34 reports the selection to the system that adds the sale to the customer's shopping cart. The beverage dispenser 34 may process one drink at a time, and then return to standby mode, until the next customer becomes verified and approved to use the dispenser 34. It is also contemplated by the present invention that the customer could be charged for a cup that is removed and/or a component which is required to make a beverage, such as a K-cup®, tea bag, flavoring packet, etc. Upon removal of one of these items from the bay tray 28, the beverage dispenser device 34 could be activated to make and/or dispense a beverage corresponding to the beverage cup, beverage component, or the like.

[Para 127] The door access unit 26 may also be able to scan identification codes of the item itself, such as UPC barcodes or the like. This can be used by the customer to verify the identity of a product and its price. This may also be used in order to return an item to its appropriate spot within the bay 22. Such scanning of items may also be used during inventory fulfillment. Scanning a product's UPC or other code may also display additional information about the product, such as ingredients or other characteristics.

[Para 128] In some cases, the system may require the customer who desires to return an item to a tray of a bay to scan the UPC or any other type of identification code that is printed on the item, by using a nearby scanner, in order to facilitate the return and to remove that item from the customer's shopping cart. At that time, the customer will be required to open the door and scan the UPC or any other type of identification code that is printed on the item and then return the product or item to its proper location on the tray. The system will verify that the scanned item matches the product's assignment which is associated with the compartment that the item was returned to and issue the credit to the customer. If the customer wishes to return a product or item after the transaction is closed, the return may require customer service assistance. Emphasis provided.

Regarding Claim 1, Hay further teaches at paragraph 19 (a camera may be associated with each bay for taking images of the customer while the customer is accessing each bay), paragraph 20 (a camera disposed above each tray of the bay may also be used as part of the sensor system wherein an image recognition system detects the presence or removal of an item from the tray by comparing photographic images), paragraph 45 (using cameras or other sensors), paragraph 61 (array of camera sensors) and paragraph 63 (multiple cameras used). Hay further states at paragraph 22, at the last three sentences, that "[a] plurality of sensors are arranged to correspond with the items held by the trays" and that "[u]se of an array of fixed sensors may be used, or sensors may be used which are manually placed in relation to each tray compartment and/or item within the bay".

Regarding Claim 1, Hay does not expressly teach sensing, by a first non-vision sensor, removal or return of a first item from a first region; verifying that the sensed removal or return of the first item by the first non-vision sensor and the sensed removal or return of the second item by the computer vision sensor correspond to a single event of removal or return of an item by a consumer, wherein the verifying comprises determining an accuracy score that removal or return of the first item by the first non-vision sensor and the sensed removal or return of the second item by the computer vision sensor correspond to a single event; and applying a purchase price of the item against an account of the consumer for purchase of the item when the accuracy score exceeds a threshold, thereby completing the purchase transaction.
Regarding Claim 1, Hay does not expressly teach, but Zalewski teaches, wherein the first sensor and the computer vision sensor are different types of sensors, i.e., such as both a weight sensor and an image sensor/camera, for example, as illustrated at figure 51b and as mentioned at col. 11, lines 19-43, and col. 127, lines 7-22, which state as follows.

(91) The systems can further process historical interaction by the user, or historical interaction by other users, to determine possibilities that items will be purchased and/or taken from a shelf. As will be described below, a number of learning processes may be executed by local and/or cloud computing, in order to make assumptions and predictions. This information may be beneficial information to the shopper, e.g., by providing the user/shopper with real-time information regarding items. The information can be custom tailored for the specific user, e.g., by custom display screens placed proximate to items or goods, and in some embodiments, audio can be provided to the user, providing custom information regarding an item or good. In still other embodiments, the tracking of users and tracking of items can share sensors, e.g., sensor fusion, to make determinations that are validated as true. For example, multiple sensors can be used to determine if a user has actually picked up an item from shelf. Some sensors may be weight sensors, some may be image sensors, e.g., cameras, and some may be motion sensors. It should be understood that the sensors that can be fused can be many, and some embodiments may use weighting to determine in real-time which sensors are most important for a specific interaction and which sensors may provide the least reliable data. The weighting can be dynamic, as the way users interact with items on shelves may be different.

(671) For purposes of completeness, FIG. 51B illustrates various sensors. By way of example and without limitation to others, a camera 5116 is provided to capture image data and the camera(s) 5116 may be placed in various locations. Infrared sensors 5128 may also be integrated in and around the shelf in order to detect changes within the volume or area being tracked. Ultrasonic sensor 5120 may also be integrated in various locations in and around the shelf 5104. Motion sensors 5126 may also be placed at various locations, to detect motion of either the user's hand or hands interfacing with items or the items themselves moving on the shelf. Weight sensors 5112 can also be placed in various locations, so as to detect motion and movement of specific items on the shelf 5104, and differentiate between which items are being picked or interacted with when on the shelf or when placed back on the shelf. Emphasis provided.

Regarding Claim 1, Hay does not expressly teach, but Zalewski teaches, sensing, by a first non-vision sensor, i.e., weight sensors (WS, 5112), infrared sensors (IRS, 5128), ultrasonic sensors (USS, 5120) and proximity sensors (5204, 5114), volume sensed/area sensing (5124) and motion sensors (5126), as illustrated in figures 51a and 51b, as mentioned at col. 11, lines 19-43 (includes sensor fusion), col. 12, lines 30-62, col. 15, lines 25-39 (weight sensors), col. 17, lines 51-61 (sensors can be either image or non-image type sensors), col. 26, lines 15-33 (density sensors), col. 124, lines 7-26, col. 125, line 59-col. 126, line 6, col. 126, lines 7-39 (weight sensors), col. 128, lines 29-49 (weight sensors) and col. 129, lines 29-40, for example, removal or return of a first item from a first region, i.e., shelf (5104); verifying that the sensed removal or return of the first item by the first non-vision sensor (WS, 5112), infrared sensors (IRS, 5128), ultrasonic sensors (USS, 5120) and proximity sensors (5204, 5114), volume sensed/area sensing (5124) and motion sensors (5126), and the sensed removal or return of the second item by the computer vision sensor, i.e., (5116, 5118, 5302, 5602, 5610), as mentioned at col. 123, lines 40-56, col. 123, line 57-col. 124, line 6, and col. 127, lines 7-40, correspond to a single event of removal or return of an item by a consumer, wherein the verifying comprises determining an accuracy score that removal or return of the first item by the first non-vision sensor and the sensed removal or return of the second item by the computer vision sensor (5116, 5118) correspond to a single event, i.e., noting the mention of the use of confidence/accuracy scores at col. 18, lines 24-45 and col. 18, line 59-col. 19, line 5, and noting the mention of sensor fusion at col. 11, lines 19-43, for example; and applying a purchase price of the item against an account of the consumer for purchase of the item when the accuracy score exceeds a threshold, thereby completing the purchase transaction, as mentioned at col. 18, line 24-col. 19, line 5, for example, which states as follows.

(121) These determinations can then be analyzed using confidence scores, machine learning, artificial intelligence, neural networks, and/or other processing. In operation 22, the user can be charged for the item in the shopping cart, when the item has been determined to have been removed from the store or area by the determination in operation 20. The charging will occur in the form of a cashier-less process, which does not require the user to check out with a person or automated cashier system.

(122) FIG. 1E illustrates another example process, which may be processed in order to determine when an item should be charge to the user, in a cashier-less purchase transaction. In operation 40, a server receives sensor data regarding items associated with store shelves of the store. In this example, the server may be local or remote to the store. The server may be a process this executed by an application, or multiple applications. The server may be a process that's executed by one or more IOT devices, and/or process by one or more compute nodes of the cloud processing system. The cloud processing system may be utilizing the data center that is assigned one or more computers for processing transactions associated with tracking items on store shelves.

(123) In operation 42, the server can receive interaction data of an item of the shelf of the store by the user. The interaction database configured to identify a type of item. The type of the item can be, for example, cereal of a particular brand, a package of a particular type, the good of the particular kind, or any other type of good or product or item that can be sold in a store. Again as mentioned above, the item does not have to be related to the grocery item, and any retail item can be tracked using one or more sensors. In one embodiment, the interaction data will identify the item type, as certain items can be associated with locations throughout the store, or on the shelf. For instance, milk may be stored in particular shelves that are refrigerated.

(124) Dry good products can be stored in other types of shelves. Produce can be stored in yet other different types of shelves or areas. Shirts and pants can be stored in different types of shelving or compartments in a store. Electronic devices can be stored or organized in different locations in the store. With this in mind, the type of item being tracked can be associated with different types of sensors, and those type sensors can be optimized to track the interaction data with that specific item and identify the item that may be associated with the sensors. In one embodiment, the item can be added to an electronic shopping cart of the user having a user account for processing a cashier lists purchase transaction. This may occur if the user's interaction causes the item to be taken from the shelf. Emphasis provided.

Regarding Claim 1, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to have provided sensing, by a first non-vision sensor, removal or return of a first item from a first region; verifying that the sensed removal or return of the first item by the first non-vision sensor and the sensed removal or return of the second item by the computer vision sensor correspond to a single event of removal or return of an item by a consumer, wherein the verifying comprises determining an accuracy score that removal or return of the first item by the first non-vision sensor and the sensed removal or return of the second item by the computer vision sensor correspond to a single event; and applying a purchase price of the item against an account of the consumer for purchase of the item when the accuracy score exceeds a threshold, thereby completing the purchase transaction, as taught by Zalewski, in Hay's system, for the purpose of automatically determining particular items based on multiple sensor data verification and confidence scoring so that they can be automatically charged to a user's/customer's account, thus completing the transaction.

Further regarding Claim 1, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to have provided wherein the first sensor and the computer vision sensor are different types of sensors, as taught by Zalewski, in Hay's method and system for conducting a purchase, for the purpose of obtaining more and varying types of data for more efficiently, accurately and independently determining the classification of transactions and events surrounding said transactions.

Regarding Claim 6, Hay teaches wherein sensing, by a computer vision sensor (1910), removal or return of a second item from the first region comprises identifying the second item, as mentioned at paragraph 21, for example.

Regarding Claim 7, Hay teaches wherein identifying the second item comprises identifying the second item from a finite list of possible items offered in the first region, as mentioned at paragraphs 135, 136 and 150, noting the mention of identification of an item from the local database and preconfigured database information, for example.

Regarding Claim 8, Hay teaches wherein verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to a single event of removal or return of an item by a consumer comprises locally verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event, as mentioned at paragraphs 9, 14, 94 and 128, for example.
Regarding Claim 9, Hay teaches wherein verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to a single event of removal or return of an item by a consumer comprises remotely verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event, as mentioned at paragraphs 9, 14, 41, 128 and 148, for example.

Regarding Claim 10, Hay teaches wherein remotely verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event comprises: transmitting, to a backend office, noting paragraph 22 mentions "tray and item placement in each bay may be configured from a remote location" in the second to last sentence, information associated with the sensed removal or return of the first item from the first region; and transmitting, to a backend office, information associated with the sensed removal or return of the second item from the first region, as mentioned at paragraphs 14, 31 and 128, and noting the central control center or main control network facility (MCNF 36) mentioned in paragraph 93, for example. See also paragraph 148 mentioning a remote computing system and paragraph 207, second sentence, i.e., "[a]dditionally, such technique allows a local and remote reading of all trays in order to determine the existence of products and create a real time 'on-the-shelf' inventory report".

Regarding Claim 11, Hay teaches wherein transmitting, to a backend office, information associated with the sensed removal or return of the second item from the first region comprises transmitting a video of the removal or return of the second item from the first region, as mentioned at paragraphs 14, 31, 93, 128, 148 and 207.

Regarding Claim 12, Hay does not expressly teach wherein remotely verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event comprises verifying, by a machine learning tool, that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event.

Regarding Claim 12, Hay does not expressly teach, but Zalewski teaches, wherein remotely verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event comprises verifying, by a machine learning tool, that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event, as mentioned at col. 19, lines 44-64, mentioning using "a machine learning model to classify shopping behavior and track events". See also col. 2, lines 35-65, col. 14, lines 12-24 and col. 16, lines 6-26, further mentioning use of machine learning in this context, for example.

Regarding Claim 12, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to have provided wherein remotely verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event comprises verifying, by a machine learning tool, that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event, as taught by Zalewski, in Hay's method for conducting a purchase, for the purpose of more efficiently, accurately and independently determining the classification of transactions and events surrounding said transactions.

Regarding Claim 13, Hay teaches wherein applying a purchase price of the item against an account of the consumer for purchase of the item comprises delaying the applying the purchase price of the item against the account of the consumer until after the remotely verifying, as illustrated in figure 10 and as mentioned at abstract and paragraphs 10, 81, 95, 96, 108-110, 127 and 146, for example. Note also that paragraphs 24 and 25 mention verification of identification and payment information before purchasing an item, as mentioned in paragraph 27, which states as follows.

[Para 27] After the customer has been identified, the customer's payment information verified, and the customer qualified, the customer is then allowed to shop at the automated store outlet. This requires unlocking a door to provide access to a bay. Typically, this requires that the customer provide identification information to an electronic unit associated with a lock at each bay door. This may be by means of an electronic unit which performs multiple scanning and reading and display functions simultaneously, including a motion detector, an electronic display screen, a keypad, two-way audio visual communication means, RFID reader, electronic code image reader, and a biometric scanner. The customer identification is verified before unlocking the door. Once the door is unlocked and opened, the system tracks the customer's activity at that bay. Emphasis provided.

Regarding Claim 14, Hay teaches further comprising correcting either the sensed removal or return of the first item or the sensed removal or return of the second item in response to the verifying, as mentioned at paragraphs 128-130 and 136, which state as follows.

[Para 128] In some cases, the system may require the customer who desires to return an item to a tray of a bay to scan the UPC or any other type of identification code that is printed on the item, by using a nearby scanner, in order to facilitate the return and to remove that item from the customer's shopping cart. At that time, the customer will be required to open the door and scan the UPC or any other type of identification code that is printed on the item and then return the product or item to its proper location on the tray. The system will verify that the scanned item matches the product's assignment which is associated with the compartment that the item was returned to and issue the credit to the customer. If the customer wishes to return a product or item after the transaction is closed, the return may require customer service assistance.

[Para 129] The activity history of each sensor is saved on the local computer system (LCS) of the ASO. Therefore, at any time a specific sensor report can be generated showing all removals and/or replacements associated with such sensor as well as the information of the customer who removed the item or replaced the item and the date and time that this occurred.

[Para 130] If the system detects suspicious behavior, the system may use audio and/or visual messages requesting that the customer scan the one or more items that were removed from the trays by using the nearby scanner, in order to confirm the accuracy of the transaction. Such a request may be with a live customer service agent.

[Para 136] If the customer returns the item 70 to the tray 28, the system in real time removes that item from the customer's virtual shopping cart. In some cases, if the customer wishes to return the item to the tray before closing the door, the customer must return the item to the same location that the item was removed from. In some cases, if the customer does not return the item to the right location, credit may not be issued for such return and an alarm and/or voice announcement will request the customer to move the item to the correct location. The misplaced item will be logged and if not corrected by the customer, such compartment's product may be automatically updated by the software. If the software is unable to update the database, the software issues a notice to the maintenance crew indicating such misplaced product. In the use of some sensors, such as RFID tags or the like, the location of each product is less critical and such functions are modified accordingly based on the situation of each occurrence. Emphasis provided.

Regarding Claim 15, Hay teaches further comprising adjusting the applying a purchase price of the item against an account of the consumer for purchase of the item in response to the verifying, as mentioned at paragraphs 128-130 and 136.

Regarding Claim 16, Hay teaches further comprising correcting either the sensed removal or return of the first item or the sensed removal or return of the second item in response to the verifying, as mentioned at paragraphs 128-130, 136, 162 and 164.

Regarding Claim 17, Hay does not expressly teach further comprising feeding back the corrected sensed removal or return of the first item or the sensed removal or return of the second item to the machine learning tool for training the machine learning tool.

Regarding Claim 17, Hay does not expressly teach, but Zalewski teaches, further comprising feeding back the corrected sensed removal or return of the first item or the sensed removal or return of the second item to the machine learning tool for training the machine learning tool, as taught by Zalewski, and as mentioned at Hay paragraphs 128-130, 134-136, 162 and 164.

Regarding Claim 18, Hay teaches wherein verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to a single event of removal or return of an item by a consumer comprises determining an accuracy of the verifying, as mentioned at paragraphs 14, 36, 44, 128, 130 and 144, for example.
Regarding Claim 19, Hay teaches wherein verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event comprises remotely verifying that the sensed removal or return of the first item and the sensed removal or return of the second item correspond to the single event when the accuracy of the verifying is below an accuracy threshold, as mentioned at paragraphs 14, 36, 44, 128, 130, 144, 148, 162 and 164, for example.

Regarding Claim 20, Hay teaches a system for conducting a purchase transaction, as illustrated in figure 6, for example, comprising: a first sensor, i.e., camera (1102) as mentioned at paragraph 122 and as illustrated in figure 11 and light strip (1200) with LED lights (1202, 1204) as mentioned at paragraph 131 and as illustrated in figure 12, that senses removal or return of a first item from a first region, as mentioned at paragraphs 135 and 136; a computer vision sensor, i.e., image recognition system incorporating cameras as mentioned at paragraph 14, or camera 1910 as illustrated in figure 19 and as mentioned at paragraph 194, that senses removal or return of a second item from the first region, as mentioned at paragraphs 135 and 136, for example; a transaction detector, i.e., main control network facility/MCNF (36) as illustrated in figure 4 and as mentioned at paragraphs 92-94, for example, that determines an accuracy that the sensed removal or return of the first item by the first sensor (1102, 1200) and the sensed removal or return of the second item by the computer vision sensor (1910) correspond to a single event of removal or return of an item by a consumer, as mentioned at paragraphs 134, 135 and 136, for example, and that forwards, to a machine learning tool, as taught by Zalewski, information associated with the sensed removal or return of the first item by the first sensor (1102, 1200) and information associated with the sensed removal or return of the second item by the computer vision sensor (1910) when the accuracy is less than an accuracy threshold, as mentioned at paragraphs 36, 44, 130 and Claims 30 and 34, for example; the machine learning tool, as taught by Zalewski, that verifies or corrects the information associated with the sensed removal or return of the first item or the information associated with the sensed removal or return of the second item and provides verified or corrected information to the transaction detector (36), as mentioned at paragraphs 36, 44, 130 and Claims 30 and 34, for example; and an automated billing processor, i.e., local computing system (LCS) and main control board (90) as illustrated in figure 7, coupled to the transaction detector (36), that applies a purchase price of the item against an account of the consumer for the item based on the verified or corrected information, thereby completing the purchase transaction, as mentioned at paragraphs 128-130, 134-136, 162 and 164, for example. Hay further states at paragraph 22, at the last three sentences, that "[a] plurality of sensors are arranged to correspond with the items held by the trays" and that "[u]se of an array of fixed sensors may be used, or sensors may be used which are manually placed in relation to each tray compartment and/or item within the bay".

Regarding Claim 20, Hay does not expressly teach a first non-vision sensor and a computer vision sensor.

Regarding Claim 20, Hay does not expressly teach, but Zalewski teaches, a first non-vision sensor, i.e., a weight sensor, and a computer vision sensor, i.e., an image sensor/camera, for example, as illustrated at figure 51b and as mentioned at col. 11, lines 19-43, and col. 127, lines 7-22, which state as follows.

(91) The systems can further process historical interaction by the user, or historical interaction by other users, to determine possibilities that items will be purchased and/or taken from a shelf. As will be described below, a number of learning processes may be executed by local and/or cloud computing, in order to make assumptions and predictions. This information may be beneficial information to the shopper, e.g., by providing the user/shopper with real-time information regarding items. The information can be custom tailored for the specific user, e.g., by custom display screens placed proximate to items or goods, and in some embodiments, audio can be provided to the user, providing custom information regarding an item or good. In still other embodiments, the tracking of users and tracking of items can share sensors, e.g., sensor fusion, to make determinations that are validated as true. For example, multiple sensors can be used to determine if a user has actually picked up an item from shelf. Some sensors may be weight sensors, some may be image sensors, e.g., cameras, and some may be motion sensors. It should be understood that the sensors that can be fused can be many, and some embodiments may use weighting to determine in real-time which sensors are most important for a specific interaction and which sensors may provide the least reliable data. The weighting can be dynamic, as the way users interact with items on shelves may be different.

(671) For purposes of completeness, FIG. 51B illustrates various sensors. By way of example and without limitation to others, a camera 5116 is provided to capture image data and the camera(s) 5116 may be placed in various locations. Infrared sensors 5128 may also be integrated in and around the shelf in order to detect changes within the volume or area being tracked. Ultrasonic sensor 5120 may also be integrated in various locations in and around the shelf 5104. Motion sensors 5126 may also be placed at various locations, to detect motion of either the user's hand or hands interfacing with items or the items themselves moving on the shelf. Weight sensors 5112 can also be placed in various locations, so as to detect motion and movement of specific items on the shelf 5104, and differentiate between which items are being picked or interacted with when on the shelf or when placed back on the shelf. Emphasis provided.

Regarding Claim 20, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to have provided a first non-vision sensor and a computer vision sensor, as taught by Zalewski, in Hay's method and system for conducting a purchase, for the purpose of obtaining more and varying types of data for more efficiently, accurately and independently determining the classification of transactions and events surrounding said transactions.

Regarding Claim 20, Hay does not expressly teach a first non-vision sensor and does not mention using an accuracy score.

Regarding Claim 20, Hay does not expressly teach, but Zalewski further teaches, a first non-vision sensor, i.e., weight sensors (WS, 5112), infrared sensors (IRS, 5128), ultrasonic sensors (USS, 5120) and proximity sensors (5204, 5114), volume sensed/area sensing (5124) and motion sensors (5126), as illustrated in figures 51a and 51b, as mentioned at col. 11, lines 19-43 (includes sensor fusion), col. 12, lines 30-62, col. 15, lines 25-39 (weight sensors), col. 17, lines 51-61 (sensors can be either image or non-image type sensors), col. 26, lines 15-33 (density sensors), col. 124, lines 7-26, col. 125, line 59-col. 126, line 6, col. 126, lines 7-39 (weight sensors), col. 128, lines 29-49 (weight sensors) and col. 129, lines 29-40, for example, and mentions using an accuracy score, i.e., noting the mention of the use of confidence/accuracy scores at col. 18, lines 24-45 and col. 18, line 59-col. 19, line 5, and noting the mention of sensor fusion at col. 11, lines 19-43, for example.

Regarding Claim 22, see the rejection of Claims 1 and 20, noting Zalewski's weight sensor (WS, 5112), as mentioned above.

Regarding Claim 24, see the rejection of Claims 1, 20 and 22, above.

Response to Arguments

Applicant's arguments with respect to Claims 1, 6-20, 22 and 24 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Therefore, Claims 1, 6-20, 22 and 24 are rejected.

Conclusion

Applicant is encouraged to contact the Examiner should there be any questions about this rejection or in an endeavor to explore potential amendments or potential allowable subject matter.

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Regarding Claim 1, Guack ‘689 is cited as teaching sensing, by a first non-vision sensor, i.e., weight sensors (1809), proximity sensor (1805) and IR curtain (1807), as mentioned at paragraph 341 and as illustrated in figure 18, for example, removal or return of a first item from a first region; verifying that the sensed removal or return of the first item by the first non-vision sensor (1805, 1807, 1809) and the sensed removal or return of the second item by the computer vision sensor, i.e., camera (1803), as mentioned at paragraphs 341-344 and as illustrated in figure 18, correspond to a single event of removal or return of an item by a consumer, wherein the verifying comprises determining an accuracy score that removal or return of the first item by the first non-vision sensor and the sensed removal or return of the second item by the computer vision sensor correspond to a single event; and applying a purchase price of the item against an account of the consumer for purchase of the item when the accuracy score exceeds a threshold, thereby completing the purchase transaction. See paragraphs 341-344, as follows.

[0341] In some implementations, the edge system 1801 (or edge node) comprises a compute node, vision node 1803, shelves with weight sensors 1809, proximity sensor 1805, and an IR curtain 1807. In some implementations, the edge system 1801 obtains a stream from all the various sensor nodes. These data streams can consist of raw data or pre-processed and filtered stream of events.
In some implementations, the edge node 1801 obtains these streams and processes them with algorithms to make decisions by taking into account all of the data from the camera 1803, weight sensor 1809, and IR shadow detector 1807. After fusing the data with fusion algorithms, the edge system 1801 predicts events using machine learning which be used for decision making.

[0342] In some implementations, the vision node 1803 comprises a camera where streams of frames obtained from the camera can be used with computer-vision algorithms to estimate a user's location in a 3D space, identification of the user, re-identification of the user, and recognizing actions of the users and objects.

[0343] In some implementations, the shelves with weight sensors 1809, as explained in more detail above in FIGS. 15-16, are configured to identify objects and interaction with objects such as the pickup and return of items 1811 into the bin. In some implementations, the weight sensors 1809 allow for accurate detection of stock and detects item pick-up and return. In some implementations, the weight sensors 1809 are configured to also quantify the number of items after the item is identified. The streams from these weight sensors 1809 on the shelves can be fed to the edge system 1801 for further processing. The method of using weight sensors 1809 to identify items and predict events using machine learning is explained above in FIG. 17d.

[0344] In some implementations, the proximity sensors 1805 or vision nodes 1803 are configured to detect the start and end of events in the shelf unit by triggering the cameras to detect movements to turn the edge system 1801 into a low-power or sleep mode to save power. In addition, the proximity sensors 1805 can be used count items 1811 depending on a distance from the back of the shelf similar to how a vending machine works. The method of using proximity sensors 1805 to identify items and predict events using machine learning is explained above in FIG. 17d.

[0345] In some implementations, IR light curtains 1807 are placed around the boundaries of shelves facing towards each other to detect events inside the shelf bins. In some implementations, the IR light curtain 1807 contain a pair of IR light and IR detector diode organized in a serial configuration facing one another. As an example, the edge system 1801 can recognize a location of a hand inserted into the shelve because the user's hand cuts the path of the IR light and creates a shadow over the IR detector. In some implementations, the IR light curtain 1807 works by recognizing whether there is a break between the IR light and the IR receiver. If there is a break between the IR light and the IR receiver then the edge system 1801 knows that a user has entered his hand within the bin of the shelf (as depicted by the dotted line). Emphasis provided.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEFFREY ALAN SHAPIRO, whose telephone number is (571) 272-6943. The examiner can normally be reached Monday-Friday, generally between 8:30 AM and 6:30 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Anita Y Coupe, can be reached on 571-270-3614. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JEFFREY A SHAPIRO/
Primary Examiner, Art Unit 3619
April 3, 2026
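The crux of the §103 combination is the claimed gating step: fuse a non-vision event with a computer-vision event, score whether they describe a single removal or return, and charge the consumer's account only when that accuracy score clears a threshold. A minimal Python sketch of that flow follows; the event fields, weights, and scoring rule are illustrative assumptions, not taken from Hay, Zalewski, or the claims themselves.

```python
from dataclasses import dataclass

# Sketch of the claimed flow: fuse a non-vision (e.g., weight) event with a
# computer-vision event, score whether they describe one physical removal or
# return, and apply the purchase price only above a threshold. All names,
# weights, and the scoring rule here are illustrative assumptions.

@dataclass
class SensorEvent:
    item_id: str      # item the sensor believes moved
    region: str       # shelf/tray region where the event was sensed
    timestamp: float  # seconds since some epoch
    kind: str         # "weight" or "vision"

def accuracy_score(a: SensorEvent, b: SensorEvent) -> float:
    """Score in [0, 1] that two sensor events describe a single event."""
    score = 0.0
    score += 0.4 if a.item_id == b.item_id else 0.0        # same item
    score += 0.3 if a.region == b.region else 0.0          # same region
    score += 0.3 if abs(a.timestamp - b.timestamp) < 2.0 else 0.0  # near in time
    return score

def settle(a: SensorEvent, b: SensorEvent, price: float,
           account: dict, threshold: float = 0.8) -> bool:
    """Apply the purchase price against the account only if the fused
    events clear the accuracy threshold (the claimed gating step)."""
    if accuracy_score(a, b) >= threshold:
        account["balance"] -= price
        return True
    return False  # below threshold: hold for further (e.g., ML) review

weight_evt = SensorEvent("sku-123", "tray-7", 10.0, "weight")
vision_evt = SensorEvent("sku-123", "tray-7", 10.4, "vision")
acct = {"balance": 20.0}
print(settle(weight_evt, vision_evt, 3.50, acct), acct)
# -> True {'balance': 16.5}
```

The below-threshold branch is where the claims (and Zalewski's col. 18-19 confidence-score passages) would hand the pair of events to a machine-learning verifier rather than charge immediately.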

Prosecution Timeline

May 19, 2022: Application Filed
Mar 21, 2025: Non-Final Rejection (§103)
Sep 26, 2025: Response Filed
Oct 20, 2025: Final Rejection (§103)
Mar 23, 2026: Request for Continued Examination
Mar 29, 2026: Response after Non-Final Action
Apr 03, 2026: Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12583542: BICYCLE PARKING DEVICE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12567298: A COIN FEEDING UNIT, A MODULE COMPRISING SAID COIN FEEDING UNIT, AND A COIN HANDLING MACHINE (granted Mar 03, 2026; 2y 5m to grant)
Patent 12562021: VEHICLE TREATMENT ARCH WITH PRESSURE DIFFERENTIAL INDICATION SYSTEM AND TOOL ENGAGEMENT SYSTEM (granted Feb 24, 2026; 2y 5m to grant)
Patent 12562017: A COIN APPARATUS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555478: SYSTEM AND METHOD FOR REALTIME COMMUNITY INFORMATION EXCHANGE (granted Feb 17, 2026; 2y 5m to grant)

Based on the 5 most recent grants. Study what changed in these cases to get past this examiner.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 55%
With Interview: 70% (+15.7%)
Median Time to Grant: 3y 9m
PTA Risk: High

Based on 881 resolved cases by this examiner. Grant probability is derived from the career allow rate.
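Per the panel's own footnote, the baseline projection comes from the career allow rate; adding the examiner's interview lift reproduces the "with interview" figure. A minimal sketch of that derivation (the additive adjustment is an assumption about how the page combines the two numbers):

```python
# Sketch: how the projection panel's numbers relate. Per the footnote, grant
# probability is derived from the career allow rate; adding the interview
# lift (an assumed additive adjustment) gives the "with interview" figure.
allow_rate = 483 / 881            # career allow rate -> "55% grant probability"
interview_lift = 0.157            # "+15.7%" from the Examiner Intelligence panel
with_interview = allow_rate + interview_lift
print(f"Baseline {allow_rate:.1%}, with interview {with_interview:.1%}")
# -> Baseline 54.8%, with interview 70.5% (panel rounds to 55% / 70%)
```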
