Prosecution Insights
Last updated: April 19, 2026
Application No. 18/504,397

UNMANNED PAYMENT METHOD USING MOBILE ROBOT AND UNMANNED PAYMENT SYSTEM USING SAME

Final Rejection — §101, §103
Filed: Nov 08, 2023
Examiner: LUDWIG, PETER L
Art Unit: 3627
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Xyz Inc.
OA Round: 2 (Final)

Grant Probability: 36% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 0m
Grant Probability With Interview: 60%

Examiner Intelligence

This examiner grants only 36% of cases.
Career Allow Rate: 36% (193 granted / 540 resolved; -16.3% vs TC avg)
Interview Lift: +24.6% (strong; allow rate in resolved cases with vs. without an interview)
Typical Timeline: 4y 0m avg prosecution; 60 applications currently pending
Career History: 600 total applications across all art units

Statute-Specific Performance

§101: 23.7% (-16.3% vs TC avg)
§103: 36.1% (-3.9% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§112: 25.2% (-14.8% vs TC avg)
Deltas are measured against the Tech Center average estimate • Based on career data from 540 resolved cases
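The deltas above follow one consistent rule: each is the examiner's statute-specific allow rate minus a single Tech Center baseline, which every row implies is about 40%. A minimal sketch of that arithmetic (the variable names and the reconstructed baseline are illustrative, derived only from the figures shown above):

```python
# Sketch of the allow-rate arithmetic implied by the cards above.
# Assumption: each "vs TC avg" delta is (examiner rate - TC average estimate),
# in percentage points, against one common TC 3600 baseline.

granted, resolved = 193, 540
career_allow_rate = granted / resolved          # 0.357... -> the 36% card

statute_rates  = {"101": 23.7, "103": 36.1, "102": 14.0, "112": 25.2}
statute_deltas = {"101": -16.3, "103": -3.9, "102": -26.0, "112": -14.8}

for statute, rate in statute_rates.items():
    # Recover the baseline each delta was measured against.
    tc_estimate = rate - statute_deltas[statute]   # 40.0 for every statute
    print(f"§{statute}: {rate}% allow rate, TC avg estimate {tc_estimate:.1f}%")

print(f"Career allow rate: {career_allow_rate:.1%}")  # 35.7%, shown as 36%
```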

Office Action

Rejections: §101, §103
DETAILED ACTION

This Final Office action is in response to Applicant's Amendment filed 02/25/2026. Claims 1-3, 5-12, and 14-18 are pending. The effective filing date of the claimed invention is 09/24/2021.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 5-12, and 14-18 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea, without significantly more.

Step 1 – Claims 1-3, 5-9 are process claims; claims 10-12, 14-18 are machine claims. Step 1 is satisfied.

Step 2A, Prong 1 – Exemplary claim 1 (and similarly claim 10) recites the following abstract idea: An unmanned payment method using a mobile robot in a store, the unmanned payment method comprising: acquiring an image (see e.g. MPEP 2106.04(a)(2)(III)(A), a claim to "collecting information [e.g. acquiring data], analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016)); recognizing a product, selected and taken by a customer, using the acquired image (see e.g. MPEP 2106.04(a)(2)(III)(A), Electric Power Group, supra; MPEP 2106.04(a)(2)(II)(A-B), fundamental economic practice); recognizing a moving direction of the product taken by the customer (see MPEP 2106.04(a)(2)(III)(C); MPEP 2106.04(a)(2)(II)(A-B)); identifying a first table related to the customer based on the moving direction of the product (see e.g. MPEP 2106.04(a)(2)(III)(A), Electric Power Group, supra; MPEP 2106.04(a)(2)(II)(A-B), fundamental economic practice); and calculating a payment amount for the product for the identified table (see e.g. MPEP 2106.04(a)(2)(III)(A), Electric Power Group, supra; MPEP 2106.04(a)(2)(II)(A-B), fundamental economic practice; MPEP 2106.04(a)(2)(I), mathematical concepts). When these abstract-idea concepts are viewed alone and in ordered combination (as a whole), the examiner finds that claim 1 (and similarly claim 10) recites an abstract idea.
Step 2A, Prong 2 – Exemplary claim 1 (and similarly claim 10) recites the following additional limitations: autonomously moving a mobile robot (see MPEP 2106.05(a)(I), examples that the courts have indicated may not be sufficient to show an improvement in computer functionality: iii. mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017), or speeding up a loan-application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 Fed. App'x 991, 996-97 (Fed. Cir. 2016) (non-precedential); in other words, Applicant has not provided in the Spec or otherwise any improvement in the art of autonomous movement of mobile robots. This is similar to "apply it" under MPEP 2106.05(f) and to generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h).); a camera sensor that acquires images (see e.g. MPEP 2106.05(a)(I): iv. recording, transmitting, and archiving digital images by use of conventional or generic technology in a nascent but well-known environment, without any assertion that the invention reflects an inventive solution to any problem presented by combining a camera and a cellular telephone, TLI Communications, 823 F.3d at 611-12, 118 USPQ2d at 1747); and recognizing a product selected by the customer (see e.g. MPEP 2106.04(a)(I): similarly, a claimed process covering embodiments that can be performed on a computer, as well as embodiments that can be practiced verbally or with a telephone, cannot improve computer technology, RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1328, 122 USPQ2d 1377, 1381 (Fed. Cir. 2017)). For claim 10, see the camera sensor/image acquirer, recognition processor, and payment processor (see MPEP 2106.05(f), mere instructions to apply an exception, or "apply it" rationale, where these one or more processors are configured to perform the identified abstract idea). When viewed alone and in ordered combination (as a whole), these additional limitations are found not to integrate the abstract idea into a practical application, and therefore claims 1 and 10 are found to be directed to an abstract idea.

Step 2B – The examiner does not find claim 1 (or 10) to recite significantly more. The analysis of the additional limitations from Step 2A, Prong 2 is equally applicable here in Step 2B. Furthermore, according to MPEP 2106.05(d), another consideration when determining whether a claim recites significantly more than a judicial exception is whether the additional element(s) are well-understood, routine, conventional activities previously known to the industry. This consideration is only evaluated in Step 2B of the eligibility analysis. The courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity:
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network);
ii. Performing repetitive calculations, Flook, 437 U.S. at 594;
iii. Electronic recordkeeping, Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 225, 110 USPQ2d 1984 (2014) (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log);
iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93.
Claims 1 and 10 include limitations relating to receiving/transmitting image(s), calculating a payment amount, and acquiring stored data, all of which are found to be well-understood, routine, and conventional. Accordingly, the examiner finds that claims 1 and 10 are directed to an abstract idea.

Dependent Claims – Claims 2 and 11 include the further abstract idea of recognizing an item by comparing data. See MPEP 2106.04(a)(2)(III). Claims 3 and 12 include the further abstract idea of identifying data. See MPEP 2106.04(a)(2)(III); MPEP 2106.04(a)(2)(II)(A-B). Claims 5 and 14 include further abstract idea. See MPEP 2106.04(a)(2)(II)(A-B). Claims 6 and 15 include further abstract idea. See MPEP 2106.04(a)(2)(I), citing Flook. Claims 7 and 16 include the additional limitation of proximity sensors that act according to standard proximity sensors, and are therefore tools to implement the abstract idea under the "apply it" rationale. See MPEP 2106.05(f). Claims 8 and 17 include further abstract idea. See MPEP 2106.04(a)(2)(III). Claims 9 and 18 include a standard payment process (abstract idea, MPEP 2106.04(a)(2)(II)(A-B)) with a payment means such as a phone of the customer. See MPEP 2106.05(f), "apply it."

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 9-11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. Pub. No. 2020/0290208 to Ha et al. ("Ha"), in view of U.S. Pat. Pub. No. 2014/0330654 to Turney et al. ("Turney"), in further view of U.S. Pat. Pub. No. 2015/0039458 to Reid ("Reid").

[Image: media_image1.png]

With regard to claims 1 and 10, Ha discloses the claimed unmanned payment method using a mobile robot in a store, the unmanned payment method (see e.g. [0057], [0017-21], Fig. 6-7: Ha discloses a service robot that autonomously navigates to and services tables in a restaurant environment (a store), including generating control signals to stop at a table and depart based on criteria) comprising: autonomously moving a mobile robot (see e.g. [0099]: "This allows the processing unit 130 to detect a moving object, e.g., a person, and to operate the robot 100 to stop or to go around the person."); acquiring an image through a camera sensor installed above a plate of the mobile robot (see e.g. abstract, Fig. 7 (above), and [0089]: "the capturing of the image by the camera may be performed in response to a change in the weight supported by the tray 200. For example, when an item has been placed on the tray 200, and/or when an item has been removed from the tray 200, the processing unit 130 will receive an input from the weight sensor 210 indicating the corresponding weight being sensed by the weight sensor 210. The processing unit 130 may then determine whether there is a weight change, and if so, the processing unit 130 may generate a signal to cause the camera to take an image. Alternatively, the camera may be configured to continuously generate images (e.g., a video) of the spatial region above the tray 200. In such cases, the generating of the images will not be based on any sensed weight. The processing unit 130 in such cases may be configured to analyze the images in real time, and determine whether an item has been placed on the tray 200, or removed from the tray 200."); recognizing a product, selected and taken by a customer, using the acquired image (see e.g. Fig. 7 and [0089]: "the camera may be configured to continuously generate images (e.g., a video) of the spatial region above the tray 200. In such cases, the generating of the images will not be based on any sensed weight. The processing unit 130 in such cases may be configured to analyze the images in real time, and determine whether an item has been placed on the tray 200, or removed from the tray 200."); recognizing a moving direction of the product taken by the customer (Ha discloses an optical camera configured to view the spatial region above the food-supporting surface, so that as the customer grabs the item the system captures images, and as the frames progress the movement and direction of the item would be recognized. See also Reid, abstract, [0018]: "Preferably each C gesture is analyzed with three elements initiated by a customer. For example, the customer initiated steps of: 1. pick-up, 2. motion, and 3. drop. Pick-up is when a product is lifted from a location such as a shelf. Motion is identified when the picked-up product is moved. Drop is identified when the moved product is delivered to the container such as a shopping cart."; Reid [0020], [0029]: "The wirelessly networked mobile electronic devices enable communication between shoppers' mobile devices, the panel, and also with nearby 3D depth sensors, cameras or other remote sensors. The sensors allow for multiple perspectives to track movement of customers in the centimeter range.
Mobile devices also enable another way to identify customers and track customer movement through a retail store via indoor localization methods. Analytical data regarding customer shopping patterns can be compiled and used by the retail store and others to maximize store design and presentation of products to customers."; Reid [0035], [0093], [0095]: "The server 30 includes appropriate hardware and software to enable observation through the panel 38 of movement and identification of shoppers and items within a store."; for direction, see Reid at e.g. [0100]: "The return recognizer 50 detects movement of an item out of the container (i.e. cart) and back to any shelf or other location in a store." This shows that the direction (shelf to cart, or cart to shelf) is tracked and monitored, and is taken into account during the billing process); identifying a first table related to the customer based on the moving direction of the product (Ha does not disclose this limitation. For the identification of a table, see Turney below. The examiner finds that the identification of a cart or account of the user is similar and, when combined with Turney, one of ordinary skill in the art would understand that the "table" is simply a designator for the billing aspect: "I will bill everything at the table on the same invoice" is similar to "I will bill everything in the shopping cart/container on the same invoice." Reid teaches at [0015]: "The sensor-equipped server identifies the container transported by the customer. Although a shopping cart is shown in FIG. 1, the container can be any identifiable container such as a bag, basket, box, a clothing pocket, or even a hand of the customer"; Reid [0017]: "In one embodiment, each C gesture is recorded in a continuous tally that is communicated to the customer via an electronic device attached to a shopping cart, or an electronic device carried by the customer. In another embodiment, a batch tally is provided at check out. Both tally types may be sequentially employed in conjunction with the present invention."; Reid [0025]: "The electronic shopping cart is a table within a database that stores prices, products identification and description data associated with customer's shopping choices"; Reid [0034]: "Both product identity and weight are sent to the panel, where the item total is tallied and added to the shopper's electronic cart."; Reid [0075]; Reid [0100-102], discussing movement of an item from cart/table to shelf, a reverse direction from the purchase gesture, where the direction is away from the cart, not towards the cart, and where this reverse movement direction causes the subtraction of the amount from the cart/table tally); and calculating a payment amount for the product for the identified table (Ha provides triggers/criteria based on sensor inputs (e.g., weight change) to proceed with actions after servicing a table. See ¶[0017]-[0019]. However, Ha does not explicitly disclose "calculating a payment amount for the product for the identified table." Turney performs bill calculation per item and enables payment against the table's ticket.
See abstract, [0002], [0010]-[0012], Fig. 3; [0007], [0018] (identifying a ticket associated with the user includes receiving data from the mobile device identifying a table of the user); [0037], table indication; [0063], table; [0072]: "In some examples, the restaurant, the table number, or both can be automatically identified by the mobile payment system, e.g., based on the user's interactions with the restaurant. The mobile payment system links a username or identifier of the user in the mobile payment system with an identifier of the restaurant, the user's table number, and a timestamp"; [0084]; Fig. 15, Fig. 19C, calculating payment amount; see also Reid as referred to above, calculating a total for the cart/table).

Therefore, it would have been obvious to one of ordinary skill in the restaurant ordering/payment art before the effective filing date of the claimed invention to modify Ha's system that identifies item removal (i.e., item order) to include identification of the user's table (as shown in Turney) and further to calculate a payment amount for the product for the table (as shown in Turney, where receipt and payment confirmation are associated with the table and include the payment amount), where the advantage of combining Turney into Ha is that a payment amount is calculated and payment is allowed to be performed by the user for the items selected, as shown throughout Turney. Furthermore, the examiner has brought in the Reid reference. The examiner finds that it would have been obvious to one of ordinary skill in the commerce arts before the effective filing date to modify the combination of Ha and Turney, as shown above, with the ability to recognize the movement direction of an item, via a hand for instance, then identify that the user has selected that item for purchase, associate that item with the user, and add that item to an amount owed by the customer, and further where this can happen upon reverse movement of the item, where the item can be removed from the cart/table and returned, and where the item is then disassociated from the user account and subtracted from the invoice tally, as shown in Reid, where this is beneficial in that it "enables customers to purchase goods in a retail store without having to queue for checkout. What is also desired are improved ways to manage inventories, and prevent theft." Reid [0007]. See also the improvement found in Reid [0052]: "In the case of P, a level of tailoring and personalization can be achieved by treating the retail space itself like a large 3DIS. A gesture used to pick from a horizontal, waist high produce bin is different from a gesture used to pick from a high vertical shelf (Note: the panels over the horizontal bins in produce would run gesture patterns appropriate to that context. panels along the refrigerated aisle would run gesture patterns appropriate to that context, etc.) Someone who is using a wheel chair may pick using a tool. Once identified, the system can train on an individual's picking style, as it would any other, and recall that pattern from RMS when the individual is in the store to help reduce error rate and improve authentication."

With regard to claims 2 and 11, Ha further discloses detecting a change in weight of the plate of the mobile robot (e.g. [0039], [0086]), wherein recognizing the product comprises recognizing the product, taken by the customer, based on the change in the weight of the plate (e.g. [0086]) and a result of an analysis of the image acquired via the camera sensor (e.g. [0087]: "In other embodiments, instead of, or in addition to, having the weight sensor 210, the robot 100 may also include an optical camera configured to view a spatial region above a food supporting surface associated with the top portion 114 (e.g., a spatial region above the tray 200). In such cases, the processing unit 130 may be configured to determine whether an item has been placed on the tray 200, or removed from the tray 200 based on an optical image obtained from the camera. For example, the camera may obtain an image of the spatial region above the tray 200 (while the tray 200 is not supporting any items) as a reference image. Such reference image may be stored in the non-transitory medium 152 in the robot 100. When an item has been placed on the tray 200, the camera captures an image, and transmits the image to the processing unit 130 for processing. The processing unit 130 may compare the image with the reference image stored in the non-transitory medium 152. In some embodiments, the comparing of the images by the processing unit 130 may be implemented by the processing unit 130 determining a correlation value between the two images. If the two images are different, the processing unit 130 may determine that an item has been placed on the tray 200.").

With regard to claims 9 and 18, Ha discloses the mobile robot context but not payment registration explicitly. Ha services tables, provides table context as destinations (see ¶[0057], Fig. 6), and supplies the trigger (item taken) to initiate charging. Turney registers and processes payments via mobile devices and POS, enabling users to pay for items from a table ticket. See [0010]-[0012], [0021], Fig. 3; [0066], credit card on file, used during the payment process. Turney associates user/table/ticket, explicitly collecting a table identifier. See [0007]-[0011], [0018]. Turney sends payment instructions to the POS, completes payment on the user's registered method, and provides confirmation. See [0011]-[0012], Fig. 3. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ha to include such payment features, as shown in Turney, as once the event is detected, invoking Turney's payment request is a routine step to close the loop for unmanned checkout.

Claims 3, 5-8, 12, and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ha, Turney, and Reid, in further view of U.S. Pat. Pub. No. 2020/0401133 to Armbrust et al. ("Armbrust").

With regard to claims 3 and 12, Ha further discloses that identifying the table related to the customer comprises identifying a table closest to the mobile robot when the customer has taken the product (Ha's robot navigates among tables and knows destinations/positions; proximity to tables and closest position informs service actions. See [0017], [0052], Fig. 6. Ha does not disclose identifying the table closest to the robot. Armbrust uses corner-mounted proximity sensors/LIDAR to localize and detect nearby objects/areas, and can determine the nearest table. See [0016]-[0017], Fig. 7, 8.). Armbrust at [0061] teaches: "The autonomous cart 45 includes a time-of-flight laser scanner 140 as shown in FIGS. 2-4, 8-9 and 12. As discussed below, the laser scanner 140 creates constantly updated mapping data or a high-resolution image map 260′ (FIG. 14) of the surrounding work environment 260 (FIG. 13) for navigation and avoidance of fixed structures (such as walls, posts, support columns and staircases) and more permanent obstacles (such as furniture, workbenches and shelving units). Although the scanner 140 also detects temporary obstacles (such as workers walking by or packages temporarily placed on the floor), the processor 102 deletes these temporary obstacles from its environmental mapping data stored in its long-term memory 103." (emphasis added). Therefore, to one of ordinary skill in the restaurant robotics art before the effective filing date of the claimed invention, using proximity/localization to select the nearest table (taught by Armbrust) at the moment of item removal (sensed continuously in Ha) is an obvious refinement to robustly assign the charge when multiple tables are nearby. Further, as shown in [0061] of Armbrust, this allows the robot to not only detect, but also avoid, and/or act accordingly toward, the detected nearest table/workbench/shelving units, etc.

With regard to claims 5 and 14, Ha discloses criteria/threshold-based decisions at a service stop (e.g., lapsed time / sensor conditions). See e.g. [0018]-[0019], [0102], Fig. 6. Armbrust teaches that proximity sensing readily supports a distance threshold to the closest table; a proximity sensor detects objects within a proximity, or threshold, of the sensor. See e.g. [0016]-[0017]. Therefore, it would have been obvious to one of ordinary skill in the robotics art before the effective filing date of the claimed invention to modify Ha to include such proximity sensors that detect tables at a certain threshold proximal distance, as applying a distance threshold to simplify logic (closest table wins) is an obvious optimization to reduce computation and errors. See Armbrust. See also Reid, where the user is associated with the shopping cart they are in proximity to and the system has learned that it is their cart/table, further combined with Ha above.

With regard to claims 6 and 15, Ha discloses audio/speaker and control for service status; the system can generate alerts upon criteria mismatch. See e.g. [0026], [0059]. Armbrust includes safety/status lights, an audio speaker, and a control panel for alerts. See ¶[0013], Fig. 6-7. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Ha to include the alerts of Armbrust, as alerting on, for instance, inconsistent assignments is a safeguard to prevent mis-billing. See also Reid: in a situation with multiple shopping carts in a store, where the customer uses a first shopping cart/table for a period of time, then accidentally grabs a different second cart/table and places an item in the second shopping cart, an alert goes off (Reid [0057]), the alert indicating that the second cart is not the first cart, as they are different devices.

With regard to claims 7 and 16, Ha discloses multiple cameras, depth sensing, and the ability to integrate other sensors (laser device) for surroundings. See ¶[0010], ¶[0022]-[0024]. More particularly, Armbrust explicitly teaches a cabled array of proximity sensors around the cart's periphery (higher/lower, angled up/down) to detect object approach and overhang—suitable for direction-of-removal. See ¶[0016], Fig. 5A-5B.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Ha to include such periphery proximity sensors, as using periphery proximity sensors to infer motion direction is a routine signal-processing application on mobile platforms, as shown in Armbrust. See also Reid.

With regard to claims 8 and 17, Ha further discloses optical cameras and processes optical images to detect events at the tray; hand motion direction is derivable from time-series images/point clouds. See [0006]-[0009], [0020], Fig. 4-5. Armbrust's vision/proximity suite complements vision-based hand tracking, providing corroboration. See Fig. 5B (sensor fusion concept). Hand-direction estimation from camera frames is well known in computer vision and image analysis; combining it with Ha's cameras is straightforward to resolve table attribution. See the combination above.

Response to Arguments

Applicant's arguments filed 02/25/2026 have been fully considered but they are not persuasive. The examiner has withdrawn the previously made rejections under 112.

For the 101 rejection, Applicant argues that it is highly impractical for a human being to perform "autonomously moving a mobile robot" as recited in claim 1. The examiner refers to the definition of "autonomous." From Collins English Dictionary 2012, William Collins & Sons, retrieved via www.dictionary.com (as attached), the word autonomous, in the "of a machine, device, etc." sense, means "able to operate with little or no human control or intervention." An example is "an autonomous vehicle." The examiner notes that under the BRI of autonomous, little human control is needed for such an operation. Even if no human intervention were required, the mere fact that something is automated is not sufficient to make it eligible. See MPEP 2106.05(a)(I), the area discussing Step 2A, Prong 2, improvements to the functioning of a computer or to any other technology or technical field; the MPEP lists examples that the courts have indicated may not be sufficient to show an improvement in computer functionality: iii. mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017), or speeding up a loan-application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 Fed. App'x 991, 996-97 (Fed. Cir. 2016) (non-precedential). It appears to the examiner that the improvement of the application lies in the billing process, not in any technological improvement of how the autonomous mobile robot moves, or anything else related to the robot technology. See Applicant's Spec: [Image: media_image4.png] "By using an indoor mobile robot including various sensors." The system uses this generic indoor mobile robot with various sensors as a tool to implement the underlying abstract idea. This is insufficient and is not an improvement that satisfies Step 2A, Prong 2, or Step 2B. See an example of what is known in the art at Applicant's Spec page 2: [Image: media_image5.png] The mobile robot was well-understood, routine, and conventional, as admitted by Applicant in this portion of the Spec.

For the 103 rejection, Applicant argues that the limitations added to claim 1 are distinguished over the prior art.
The examiner respectfully disagrees, as indicated above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Peter Ludwig, whose telephone number is (571) 270-5599. The examiner can normally be reached Mon-Fri 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fahd Obeid, can be reached at 571-270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PETER LUDWIG/
Primary Examiner, Art Unit 3627
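For readers mapping the rejection language to the underlying method, below is a minimal, hypothetical sketch of the claim 1 pipeline the Office Action analyzes: acquire image, recognize the taken product, infer its moving direction, identify the related table, and tally a payment amount. Every name, type, and the cosine-alignment heuristic for picking the table is an illustrative assumption; nothing here is taken from Applicant's specification or the cited references.

```python
# Hypothetical sketch of the claimed flow; names and heuristic are illustrative.
import math
from dataclasses import dataclass

@dataclass
class Table:
    table_id: int
    position: tuple[float, float]   # (x, y) in the store frame
    tab: float = 0.0                # running payment amount for this table

def table_in_moving_direction(direction: tuple[float, float],
                              robot_pos: tuple[float, float],
                              tables: list[Table]) -> Table:
    """Pick the table best aligned with the product's moving direction:
    cosine similarity between the movement vector and robot-to-table vector."""
    def alignment(t: Table) -> float:
        dx = t.position[0] - robot_pos[0]
        dy = t.position[1] - robot_pos[1]
        denom = (math.hypot(dx, dy) * math.hypot(*direction)) or 1e-9
        return (dx * direction[0] + dy * direction[1]) / denom
    return max(tables, key=alignment)

def charge_taken_product(price: float,
                         direction: tuple[float, float],
                         robot_pos: tuple[float, float],
                         tables: list[Table]) -> Table:
    # "identifying a first table ... based on the moving direction of the
    # product" and "calculating a payment amount ... for the identified table"
    table = table_in_moving_direction(direction, robot_pos, tables)
    table.tab += price
    return table
```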

Prosecution Timeline

Nov 08, 2023
Application Filed
Nov 11, 2025
Non-Final Rejection — §101, §103
Feb 25, 2026
Response Filed
Mar 12, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12602678
CONFIGURABLE CORRECTIONAL FACILITY COMPUTER KIOSK SYSTEMS AND METHODS FOR PORTABLE ELECTRONIC DEVICE ACCESS AND MANAGEMENT
2y 5m to grant • Granted Apr 14, 2026
Patent 12555086
SYSTEMS AND METHODS FOR A USER INTERFACE FOR MAKING RECOMMENDATIONS
2y 5m to grant • Granted Feb 17, 2026
Patent 12518253
SYSTEM AND METHOD FOR E-RECEIPT PLATFORM
2y 5m to grant • Granted Jan 06, 2026
Patent 12488321
SMART CONTRACT DEPLOYMENT FOR DCF TRUST SERVICES BILLING
2y 5m to grant • Granted Dec 02, 2025
Patent 12475517
COMPUTER PROGRAM, METHOD, AND SYSTEM FOR AUTOMATED SAVINGS AND TIME-BASED MATCHING CONTRIBUTIONS
2y 5m to grant • Granted Nov 18, 2025
Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 36%
With Interview: 60% (+24.6%)
Median Time to Grant: 4y 0m
PTA Risk: Moderate
Based on 540 resolved cases by this examiner. Grant probability derived from career allow rate.
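As a sanity check on these projections, here is a short sketch of how the with-interview figure appears to be derived, assuming the +24.6% interview lift is additive in percentage points on top of the career allow rate (an assumption; the rounding matches the cards above):

```python
# Sketch: deriving the "With Interview" projection from the career stats.
# Assumption: the +24.6% lift adds percentage points to the 36% baseline.

granted, resolved = 193, 540
base = granted / resolved                # 0.357 -> shown as 36%
interview_lift = 0.246                   # +24.6 percentage points

with_interview = base + interview_lift   # 0.603
print(f"Base grant probability: {base:.0%}")            # 36%
print(f"With interview:         {with_interview:.0%}")  # 60%
```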
