DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This is in reply to the communication filed on 04/26/2024.
Claims 1-12 are currently pending and have been examined.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Step 1:
Claims 1-7 recite a method, which is directed to a process.
Claim 8 recites a system, which is directed to a machine.
Claim 9 recites a device, which is directed to a machine.
Claims 10-12 recite a kit (device), which is directed to a machine.
Therefore, each claim falls within one of the four statutory categories.
Step 2A, Prong 1 (Is a judicial exception recited?):
The independent claims 1, 8, 9 and 10 recite the abstract idea of providing consumers with a more convenient self-service checkout experience (see specification [0003]). This idea is described by the steps of:
continuously detecting an occurrence of a checkout interruption event;
capturing a plurality of sensing information, recognizing the plurality of sensing information, thereby generating a plurality of recognition results, and determining whether the checkout interruption event has occurred accordingly;
when the checkout interruption event occurs: providing a checkout operation, and forwarding the plurality of sensing information and the plurality of recognition results to a remote service; and
allowing a user to conduct a real-time communication with the remote service.
A) These claims recite a certain method of organizing human activity, as the above abstract idea limitations are directed to managing personal behavior or relationships or interactions between people. The examiner finds the claims to simply recite steps of a commercial interaction, as the claimed subject matter teaches a method and system in which a computer provides consumers with a more convenient self-service checkout experience. The examiner additionally finds the claims to be similar to examples the courts have identified as being a certain method of organizing human activity:
creating a contractual relationship (a transaction performance guaranty), buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 112 USPQ2d 1093 (Fed. Cir. 2014).
processing insurance claims for a covered loss or policy event under an insurance policy (i.e., an agreement in the form of a contract), Accenture Global Services v. Guidewire Software, Inc., 728 F.3d 1336, 1338-39, 108 USPQ2d 1173, 1175-76 (Fed. Cir. 2013).
Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709, 714-15, 112 USPQ2d 1750, 1753-54 (Fed. Cir. 2014). The patentee in Ultramercial claimed an eleven-step method for displaying an advertisement (ad) in exchange for access to copyrighted media, comprising steps of receiving copyrighted media, selecting an ad, offering the media in exchange for watching the selected ad, displaying the ad, allowing the consumer access to the media, and receiving payment from the sponsor of the ad. 772 F.3d at 715, 112 USPQ2d at 1754.
B) These claims recite a certain method of organizing human activity, as the above abstract idea limitations are directed to managing personal behavior or relationships or interactions between people. The examiner finds the claims to simply recite steps of following rules or instructions to provide consumers with a more convenient self-service checkout experience. The examiner additionally finds the claims to be similar to examples the courts have identified as being a certain method of organizing human activity:
filtering content, BASCOM Global Internet v. AT&T Mobility, LLC, 827 F.3d 1341, 1345-46, 119 USPQ2d 1236, 1239 (Fed. Cir. 2016) (finding that filtering content was an abstract idea under step 2A, but reversing an invalidity judgment of ineligibility due to an inadequate step 2B analysis).
considering historical usage information while inputting data, BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281, 1286, 127 USPQ2d 1688, 1691 (Fed. Cir. 2018).
voting, verifying the vote, and submitting the vote for tabulation, Voter Verified, Inc. v. Election Systems & Software, LLC, 887 F.3d 1376, 126 USPQ2d 1498 (Fed. Cir. 2018).
providing information to a person without interfering with the person's primary activity, Interval Licensing LLC v. AOL, Inc., 896 F.3d 1335, 127 USPQ2d 1553 (Fed. Cir. 2018).
Step 2A, Prong 2 (Is the exception integrated into a practical application?):
This judicial exception is not integrated into a practical application because the claims satisfy the following criteria, which indicate that the claims do not integrate the abstract idea into a practical application:
The claimed additional limitations are:
Claim 1: a self-service terminal, a control module, driving a plurality of sensors, executing a plurality of independent models, disabling a first display device, establishing a transmission connection with the remote service station, a second display device.
Claim 8: a self-service terminal, a control module, driving a plurality of sensors, executing a plurality of independent models, disabling a first display device, establishing a transmission connection with the remote service station, a second display device.
Claim 9: a control module, driving a plurality of sensors, executing a plurality of independent models, disabling a first display device, establishing a transmission connection with the remote service station, a second display device.
Claim 10: self-checkout kit, attachable to a checkout management device, a control module, integrated within the checkout management device or configured separately from but communicatively connected with the checkout management device, driving a plurality of sensors, executing a plurality of independent models, disabling a first display device, establishing a transmission connection with the remote service station, a second display device.
The additional limitations are directed to using a generic computer to process information and perform the abstract idea. Therefore, the limitations merely amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
Step 2B (Does the claim recite additional elements that amount to significantly more than the judicial exception?):
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The Step 2B considerations overlap with those of Step 2A, Prong 2, and have already been substantially addressed in the Step 2A, Prong 2 analysis above. As discussed above, the additional limitations merely amount to adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f).
In addition, the dependent claims recite:
Step 2A, Prong 1 (Is a judicial exception recited?):
Dependent claims 2-7 and 11-12 recite limitations further narrowing the abstract idea recited in independent claims 1, 8, 9 and 10 and are therefore directed to the same abstract idea.
Step 2A, Prong 2 and Step 2B:
Dependent claims 2-7 and 11-12 further narrow the abstract idea recited in independent claims 1, 8, 9 and 10 and are therefore directed to the same abstract idea.
The dependent claims recite the following additional limitations:
Claim 2: the self-service terminal, the control module, driving the first display device, driving the second display device, the plurality of sensors, the remote service station,
Claim 3: the self-service terminal, the control module, image sensor(s), a gesture recognition independent model, an object recognition independent model, checkout assistance decision independent model,
Claim 4: the self-service terminal, a high-position image sensor, proximity sensor(s), information code sensor, an information code recognition independent model, the checkout assistance decision independent model,
Claim 5: the control module, selectively driving the first display device,
Claim 6: the checkout assistance decision independent model,
Claim 11: image sensor, proximity sensor(s), information code sensor, the control module, a gesture recognition independent model, an object recognition independent model, checkout assistance decision independent model,
Claim 12: the checkout management device,
However, the examiner finds each of these additional elements to be directed to merely “applying” generic technology to perform the recited abstract idea of providing consumers with a more convenient self-service checkout experience. The recitation of generic computer technology used as a tool to execute the steps that define the abstract idea does not provide for integration at the second prong and does not provide significantly more at Step 2B.
Therefore, the limitations of claims 1-12, when viewed individually and as an ordered combination, are directed to ineligible subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 8-10 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kundu et al. (US 8462212 B1, hereinafter “Kundu”) in view of Johnson (US 20150058215 A1, hereinafter “Johnson”).
Regarding claims 1, 8, 9 and 10. Kundu discloses a self-checkout method, comprising:
at a self-service terminal (Kundu, col. 9-lines 5-10; “environment 300 … A transaction terminal 34 such as a point-of-sale terminal or cash register is under control of an operator 308 such as a store employee to allow the customer 305 to purchase and/or return the items 307”), executing following steps through a control module:
continuously detecting an occurrence of a checkout interruption event (Kundu, col.11-lines 19-21; “the monitoring system 155 can monitor one or more sources in scan environment 300 to detect occurrence of scan events occurring at checkout”);
driving a plurality of sensors to capture a plurality of sensing information, (Kundu, col.4-lines 34-38; “the second monitoring system can be configured to monitor inputs and/or outputs of one or more sources in the retail environment to detect occurrence of scan or other type of events occurring at checkout”)
executing a plurality of independent models to recognize the plurality of sensing information, thereby generating the plurality of recognition results, (Kundu, col. 6-lines 8-19; “the analyzer creates a table or mapping indicating a skew or time difference between the first clock and the second clock based on the heartbeat signals and/or related messaging between the analyzer and the second monitoring system … The corresponding corrected time stamp can then be used to accurately identify a particular time in the video where the event occurred”) and
determining whether the checkout interruption event has occurred accordingly; (Kundu, col. 4-lines 21-31; “a monitoring system configured to monitor events associated with scanning of one or more items at a scanner system. For example, embodiments herein include an analyzer system in communication with a first monitoring system and a second monitoring having different system clocks. The analyzer System can be configured to receive, from a first monitoring system, video frame information of an item being scanned at a scanner system. The analyzer can be configured to detect, via one or more communications from a second monitoring system, an occurrence of at least one event associated with scanning of the item at the scanner system”)
Kundu substantially discloses the claimed invention; however, Kundu fails to explicitly disclose the “when the checkout interruption event occurs: disabling a first display device that provides a checkout operation, and forwarding the plurality of sensing information and the plurality of recognition results to a remote service station; and establishing a transmission connection with the remote service station, to allow a user to conduct a real-time communication with the remote service station through a second display device”. However, Johnson teaches:
when the checkout interruption event occurs: disabling a first display device that provides a checkout operation, (Johnson, Fig. 5, [0093]; “In step 507, a customer may select help and/or language preferences at the VTS. In step 509, the VTS may disable various devices on the VTS to prevent customer intervention. In step 511, the VTS may display a “Please Hold for next available agent” message or a similar message”) and
forwarding the plurality of sensing information and the plurality of recognition results to a remote service station; (Johnson, Fig. 5, [0093]; “In step 513, a CTI service may find a next available agent using the customer's preferences”, [0043]; “the customer logs into the VTS (e.g., by swiping a financial institution card and entering a PIN), the customer's preferences associated with that account may be pulled from a preferences database … the routing of the communication link between the VTS and a video agent …where a video agent is automatically invoked”) and
establishing a transmission connection with the remote service station (Johnson, [0051]; “The environment may further include a contact center video session and routing function 208, which may provide for agent call routing and/or transfer functions, and/or for video session enablement and management functionality”), to allow a user to conduct a real-time communication with the remote service station through a second display device. (Johnson, Fig. 5, [0093]; “In step 523, the CTI service may set up video and/or voice channel(s) between the VTS and the agent's terminal. A video and/or voice channel connection may be established in step 525. In step 527, the VTS may agree to connect to the video agent via the established video and/or voice channels selected in steps 523 and 525. In step 529, a video and/or voice of the user may be displayed at the agent's terminal. The video agent may confirm a connection with the VTS/user and make contact with the user via the audio and video capabilities of the terminal and VTS. In step 533, the VTS may similarly display video and/or voice of the video agent. In step 535, the user may respond to the video agent's video and/or voice guidance (e.g., guidance to fix an error, complete a transaction using the VTS, provide additional transaction and/or customer information, and the like)”)
It would have been obvious to one of ordinary skill in the art before the effective filing date, in view of the teachings of Johnson, to provide the method and systems of Kundu with the ability to perform certain transactions that require interaction with, for example, a physical person, since all the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods (a video transaction machine or system (VTS)) with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art before the effective filing date; i.e., one skilled in the art would have recognized that the point of sale, as used in Johnson, would allow the point of sale of Kundu to facilitate self-service transactions, see Johnson [0002].
Regarding claim 2. The combination of Kundu in view of Johnson discloses the self-checkout method as claimed in claim 1, further comprising:
at the self-service terminal, executing following steps through the control module: (see claim 1 rejection supra)
Kundu substantially discloses the claimed invention; however, Kundu fails to explicitly disclose the “driving the first display device to provide the user with the checkout operation; driving the second display device to display a checkout assistance information to the user; driving the plurality of sensors to capture the plurality of sensing information; transmitting the plurality of sensing information and the plurality of recognition results to a remote server; and when the checkout interruption event occurs: temporarily disabling the first display device to interrupt the checkout operation, and instructing the remote server to transmit the plurality of sensing information and the plurality of recognition results to the remote service station; and establishing the transmission connection with the remote service station, to enable the user to conduct a real-time audio and video streaming communication with the remote service station through the second display device”. However, Johnson teaches:
driving the first display device to provide the user with the checkout operation; (Johnson, Fig. 5, [0093]; “In step 505, the VTS 101 may display a welcome or attract loop, such as advertisements or a welcome banner. In step 507, a customer may select help and/or language preferences at the VTS”)
driving the second display device to display a checkout assistance information to the user; driving the plurality of sensors to capture the plurality of sensing information; (Johnson, Fig. 6, [0094]; “In step 601, the video agent may determine whether the user at the VTS 101 has a card (e.g., bank card, debit card, and the like) available for authentication. If so, the video agent may enter a predefined request 603 through the agent's terminal to enable the card reader. The VTS card reader may be enabled in in step 605. In step 607, a “Please Insert Card” or similar message may be displayed to the user … the VTS may display a “Please Enter Pin” or similar message. In step 617, the user may enter a PIN and press OK or a similar button”)
transmitting the plurality of sensing information and the plurality of recognition results to a remote server; (Johnson, [0087]; “During the course of the session, images may be created by the VTS 101 and stored in the Image Service 1125. These images may include scans of the customer's ID and checks presented. The VTA App Server 1105 may retrieve these images from the Image Service 1125 for display to the agent through the VTA App running in the browser on the agent's desktop”) and
when the checkout interruption event occurs: temporarily disabling the first display device to interrupt the checkout operation, and instructing the remote server to transmit the plurality of sensing information and the plurality of recognition results to the remote service station; (Johnson, Fig. 5, [0093]; “In step 513, a CTI service may find a next available agent using the customer's preferences”, [0043]; “the customer logs into the VTS (e.g., by swiping a financial institution card and entering a PIN), the customer's preferences associated with that account may be pulled from a preferences database … the routing of the communication link between the VTS and a video agent …where a video agent is automatically invoked”) and
establishing the transmission connection with the remote service station, to enable the user to conduct a real-time audio and video streaming communication with the remote service station through the second display device. (Johnson, Fig. 5, [0093]; “In step 523, the CTI service may set up video and/or voice channel(s) between the VTS and the agent's terminal. A video and/or voice channel connection may be established in step 525. In step 527, the VTS may agree to connect to the video agent via the established video and/or voice channels selected in steps 523 and 525. In step 529, a video and/or voice of the user may be displayed at the agent's terminal. The video agent may confirm a connection with the VTS/user and make contact with the user via the audio and video capabilities of the terminal and VTS. In step 533, the VTS may similarly display video and/or voice of the video agent. In step 535, the user may respond to the video agent's video and/or voice guidance (e.g., guidance to fix an error, complete a transaction using the VTS, provide additional transaction and/or customer information, and the like)”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Kundu to include driving the first display device to provide the user with the checkout operation; driving the second display device to display a checkout assistance information to the user; driving the plurality of sensors to capture the plurality of sensing information; transmitting the plurality of sensing information and the plurality of recognition results to a remote server; and when the checkout interruption event occurs: temporarily disabling the first display device to interrupt the checkout operation, and instructing the remote server to transmit the plurality of sensing information and the plurality of recognition results to the remote service station; and establishing the transmission connection with the remote service station, to enable the user to conduct a real-time audio and video streaming communication with the remote service station through the second display device, as taught by Johnson, where this would be performed in order to provide automated machines with functionality to perform certain transactions that require interaction with, for example, a physical person, see Johnson [0002].
Regarding claim 3. The combination of Kundu in view of Johnson discloses the self-checkout method as claimed in claim 2, further comprising:
at the self-service terminal, executing the following steps through the control module: (see claim 1 rejection supra)
driving a first image sensor to capture a first image sensing information containing a hand or an information code; (Kundu, col. 9-lines 51-56; “after logging in, the operator 308 can begin selecting items 307 from input region 302-1, such as by picking up the individual items 307 by hand. The operator 308 passes each item 307 from the item input region 302-1 over the Scanner system 36 generally located within an item read region 302-2”)
driving a second image sensor to capture a second image sensing information containing a product; executing a gesture recognition independent model to recognize the first image sensing information and generating a first recognition result accordingly; (Kundu, col. 5-lines 4-10; “the second monitoring system can monitor motion of a hand across a scan window. The detected motion can correspond to passing of an item over or passed a scan window of the scanner system and placing of an item in a shopping bag. Upon detection of such a motion, the second monitoring system can generate a notification of the motion event and forward the event notification (e.g., motion of the hand) to the analyzer system”)
executing an object recognition independent model to recognize the second image sensing information and generating a third recognition result accordingly; inputting the first and third recognition results into a checkout assistance decision independent model and executing the checkout assistance decision independent model to determine whether a missed scan event has occurred accordingly; (Kundu, col.5-lines 35 to col.6-line 30; “The monitoring system according to embodiments herein can monitor any number of one or more suitable sources (in addition to those as discussed above) for detecting occurrence of different types of events in the scan environment … the events and corresponding video can be reviewed by an automated system that analyzes pixels of the video to identify possible fraudulent activity such as when a customer or store employee passes one or more items around the scanner (or RFID reader) without being scanned”)
and when a missed scan event occurs: determining that a checkout interruption event has occurred. (Kundu, col. 6; lines 26-35; “the events and corresponding video can be reviewed by an automated system … to identify whether an abuse has occurred”)
Regarding claim 12. The combination of Kundu in view of Johnson discloses the self-checkout kit as claimed in claim 10, wherein the checkout management device is a point of sale machine or a cash register. (Kundu, col. 1-lines 22-24; “Retail establishments commonly utilize point of sale or other transaction terminals, such as cash registers, to allow customers of those establishments to purchase items”)
Claims 4-5 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Kundu in view of Johnson, further in view of Debucean et al. (US 20210216785 A1, hereinafter “Debucean”).
Regarding claims 4 and 11. The combination of Kundu in view of Johnson discloses the self-checkout method as claimed in claim 3, further comprising:
at the self-service terminal, executing the following steps through the control module: (see claim 1 rejection supra)
The combination of Kundu in view of Johnson substantially discloses the claimed invention; however, the combination fails to explicitly disclose the “driving a high-position image sensor to capture a third image sensing information containing the user, the hand, the product, or the information code; driving a first proximity sensor to detect a first distance information with respect to the hand, the product, or the information code, and obtaining a first sensing information accordingly; driving a second proximity sensor to detect a second distance information with respect to the hand, the product, or the information code, and obtaining a second sensing information accordingly; driving a third proximity sensor to detect a third distance information with respect to the hand, the product, or the information code, and obtaining a third sensing information accordingly; driving an information code sensor to read the information code, and obtaining a fourth sensing information accordingly; executing an information code recognition independent model based on the first image sensing information to produce a second recognition result; inputting the first to third recognition results, the first to third image sensing information, and the first to fourth sensing information into the checkout assistance decision independent model, executing the checkout assistance decision independent model to recognize a consuming behavior for the user and determining whether a missed scan event has occurred; and when a missed scan event occurs: determining that a checkout interruption event has occurred”. However, Debucean teaches:
driving a high-position image sensor (Debucean, [0021]; “an SCO terminal 100 configured to enable a customer to scan and bill one or more objects present in their shopping cart. The SCO terminal 100 includes a scanner 101, a video camera 102, first and second proximity sensors 104a and 104b”) to capture a third image sensing information containing the user, the hand, the product, or the information code; (Debucean, [0049]; “the ANN 306 is trained to establish an internal representation of a relationship between features of a viewed image/video frame and the probability of the image/video frame representing an “Empty hand”, “Hand with object” or “No hand” scenario”)
driving a first proximity sensor (Debucean, [0021]; “an SCO terminal 100 configured to enable a customer to scan and bill one or more objects present in their shopping cart. The SCO terminal 100 includes a scanner 101, a video camera 102, first and second proximity sensors 104a and 104b”) to detect a first distance information with respect to the hand, the product, or the information code, and obtaining a first sensing information accordingly; driving a second proximity sensor (Debucean, [0021]; “an SCO terminal 100 configured to enable a customer to scan and bill one or more objects present in their shopping cart. The SCO terminal 100 includes a scanner 101, a video camera 102, first and second proximity sensors 104a and 104b”) to detect a second distance information with respect to the hand, the product, or the information code, and obtaining a second sensing information accordingly; driving a third proximity sensor to detect a third distance information with respect to the hand, the product, or the information code, and obtaining a third sensing information accordingly; (Debucean, [0024]; “Each of the first and second proximity sensors 104a and 104b are disposed proximally to the video camera 102 and are configured to detect the presence of nearby objects without requiring physical contact therewith”)
driving an information code sensor to read the information code, and obtaining a fourth sensing information accordingly; (Debucean, [0021]; “Referring to FIG. 1A, there is shown an SCO terminal 100 configured to enable a customer to scan and bill one or more objects present in their shopping cart … the scanner 101 includes a table-mounted bar code scanner for enabling a customer to scan Universal Product Codes (UPC) of one or more objects, and is arranged in a horizontal orientation relative to the user”)
executing an information code recognition independent model based on the first image sensing information to produce a second recognition result; (Debucean, [0028]; “When the scanner 101 scans a UPC label of an object, it generates Point of Sale (POS) data. The POS data includes a plurality of SCO-related variables which enable detection of a scanning incident and its timing, together with an identifier of the object scanned”)
inputting the first to third recognition results, the first to third image sensing information, and the first to fourth sensing information into the checkout assistance decision independent model, executing the checkout assistance decision independent model to recognize a consuming behavior for the user and determining whether a missed scan event has occurred; and when a missed scan event occurs: determining that a checkout interruption event has occurred. (Debucean, [0056]; “A processor 307 in the processing unit 305, receives the first, second, and third output values n1, n2 and n3 from the ANN 306, and POS data from the scanner 101 … The processor 307 is configured to process the processing unit input variables to detect a correlation therebetween the video data and the POS data, and to generate two or more binary output variables designating the occurrence of a scan event or a non-scan event”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Kundu to include driving a high-position image sensor to capture a third image sensing information containing the user, the hand, the product, or the information code; driving a first proximity sensor to detect a first distance information with respect to the hand, the product, or the information code, and obtaining a first sensing information accordingly; driving a second proximity sensor to detect a second distance information with respect to the hand, the product, or the information code, and obtaining a second sensing information accordingly; driving a third proximity sensor to detect a third distance information with respect to the hand, the product, or the information code, and obtaining a third sensing information accordingly; driving an information code sensor to read the information code, and obtaining a fourth sensing information accordingly; executing an information code recognition independent model based on the first image sensing information to produce a second recognition result; inputting the first to third recognition results, the first to third image sensing information, and the first to fourth sensing information into the checkout assistance decision independent model, executing the checkout assistance decision independent model to recognize a consuming behavior for the user and determining whether a missed scan event has occurred; and when a missed scan event occurs: determining that a checkout interruption event has occurred, as taught by Debucean, where this would be performed in order to identify and alert of occurrences of products not being scanned at a self-checkout counter, see Debucean [0002].
Regarding claim 5. The combination of Kundu in view of Johnson discloses the self-checkout method as claimed in claim 4, further comprising:
executing one of the following steps through the control module: (see claim 1 rejection supra)
Kundu substantially discloses the claimed invention; however, Kundu fails to explicitly disclose the “selectively driving the first display device to generate a touch based remote assistance call button; and detecting whether a physical remote assistance call button or the touch based remote assistance call button has been pressed”. However, Johnson teaches:
selectively driving the first display device to generate a touch based remote assistance call button; and detecting whether a physical remote assistance call button or the touch based remote assistance call button has been pressed. (Johnson, [0102]; “The user may enter the VTS flow (instead of the ATM flow), such as by pressing a video agent assistance button on a display at the VTS 101”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Kundu to include selectively driving the first display device to generate a touch based remote assistance call button; and detecting whether a physical remote assistance call button or the touch based remote assistance call button has been pressed, as taught by Johnson, where this would be performed in order to provide automated machines with functionality to perform certain transactions that require interaction with, for example, a physical person, see Johnson [0002].
Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kundu in view of Johnson, further in view of Debucean, and further in view of Wen et al. (US 20210183212 A1, hereinafter “Wen”).
Regarding claim 6. The combination of Kundu in view of Johnson discloses the self-checkout method as claimed in claim 5, wherein the checkout assistance decision independent model is configured to execute one of the following rules to determine whether the checkout interruption event has occurred: (see claim 1 rejection supra)
Kundu substantially discloses the claimed invention; however, Kundu fails to explicitly disclose the “determining that a first type remote assistance call event has occurred, when the physical remote assistance call button is pressed; determining that a second type remote assistance call event has occurred, when the touch based remote assistance call button is pressed”. However, Johnson teaches:
determining that a first type remote assistance call event has occurred, when the physical remote assistance call button is pressed; determining that a second type remote assistance call event has occurred, when the touch based remote assistance call button is pressed; (Johnson, [0093]; “FIG. 5 is a flow chart of an example video/voice session initiation process that may be performed by, e.g., the system of FIG. 1. In step 505, the VTS 101 may display a welcome or attract loop, such as advertisements or a welcome banner. In step 507, a customer may select help and/or language preferences at the VTS”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Kundu to include determining that a first type remote assistance call event has occurred, when the physical remote assistance call button is pressed; determining that a second type remote assistance call event has occurred, when the touch based remote assistance call button is pressed, as taught by Johnson, where this would be performed in order to provide automated machines with functionality to perform certain transactions that require interaction with, for example, a physical person, see Johnson [0002].
The combination of Kundu in view of Johnson substantially discloses the claimed invention; however, the combination fails to explicitly disclose the “determining that a missed scan event has occurred, when the third recognition result indicates that the detected quantity of products is greater than the quantity of products successfully scanned by the information code sensor; and determining that a checkout anomaly event has occurred, when the third recognition result indicates that the successfully detected product has disappeared”. However, Wen teaches:
determining that a missed scan event has occurred, when the third recognition result indicates that the detected quantity of products is greater than the quantity of products successfully scanned by the information code sensor; and determining that a checkout anomaly event has occurred, when the third recognition result indicates that the successfully detected product has disappeared. (Wen, [0094-0101]; “Step 403: Determine a quantity of skipped scans of the user according to the detection result in response to an operation event that the user confirms that all the items are scanned … If the quantity of skipped scans of the user is greater than or equal to the first preset threshold, settlement on the user may be obstructed. The settlement-forbidden interface is displayed, and the warning information may be further sent to the on-site monitoring terminal to alert an on-site monitoring person … Step 405: Settle the items scanned by the user in response to an operation event of the on-site monitoring person”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Kundu to include determining that a missed scan event has occurred, when the third recognition result indicates that the detected quantity of products is greater than the quantity of products successfully scanned by the information code sensor; and determining that a checkout anomaly event has occurred, when the third recognition result indicates that the successfully detected product has disappeared, as taught by Wen, where this would be performed in order to allow customers to pay in a self-service manner, thus avoiding the queuing process and bringing great convenience to the customers, see Wen [0003].
Regarding claim 7. The combination of Kundu in view of Johnson discloses the self-checkout method as claimed in claim 6, wherein
Kundu substantially discloses the claimed invention; however, Kundu fails to explicitly disclose the “the checkout interruption event includes one of the first type remote assistance call event, the second type remote assistance call event, the missed scan event, and the checkout anomaly event”. However, Johnson teaches:
the checkout interruption event includes one of the first type remote assistance call event, the second type remote assistance call event, the missed scan event, and the checkout anomaly event. (Johnson, [0052]; “A services framework 350, which may be part of a larger system outside of VTS 101 (e.g., part of network 102, ATM host 103, and/or VTS/CC client 104) may include, for example, a contact center services framework 309, an image processing services framework 310, a financial/settlement/servicing services framework 311, an offers/advertisement services framework 312, a profile/preferences services framework 313, a VTS monitoring/control services framework 314, and/or an unauthorized activity services framework 316”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Kundu such that the checkout interruption event includes one of the first type remote assistance call event, the second type remote assistance call event, the missed scan event, and the checkout anomaly event, as taught by Johnson, where this would be performed in order to provide automated machines with functionality to perform certain transactions that require interaction with, for example, a physical person, see Johnson [0002].
Conclusion
1. Any inquiry concerning this communication or earlier communications from the examiner should be directed to AVIA SALMAN whose telephone number is (313) 446-4901. The examiner can normally be reached Monday through Friday, 9:00 AM to 5:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, FAHD OBEID can be reached at (571) 270-3324. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AVIA SALMAN/Primary Patent Examiner, Art Unit 3627