DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment & Arguments
Applicant’s amendment and arguments filed 27 January 2026 with respect to claims 1, 3-9, and 11-17 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The examiner acknowledges the discussion in the interview conducted with Applicant’s representative on 12 January 2026. Based on a further search conducted to fully consider the amended claims and Applicant’s arguments, additional prior art was found, as applied below. The Examiner’s arguments in the Office action mailed 16 October 2025 are also incorporated herein by reference.
The examiner further acknowledges Applicant’s argument that Buibas does not analyze the transition of a process of behavior (e.g., from interest to comparison) based on skeletal information to infer psychological states, and that Suetsugi does not analyze the skeletal-level behavioral transition or the specific real-time check against a payment machine’s registration. The examiner likewise acknowledges Applicant’s argument that neither Walker nor Popa teaches using skeleton information to identify a transition of a behavior process, or determining whether an item of interest was not registered at a specific payment machine to trigger transmission of product-related information thereto. These arguments, like the prior arguments, consider each prior art reference individually and in isolation. The argued aspects of the claims are obvious when the prior art is considered in combination, as applied below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-9, and 11-15 are rejected under 35 U.S.C. 103 as being unpatentable over Suetsugi (US 2021/233103 A1) in view of Buibas et al (US 10373322 B1), Popa et al (“ASSESSMENT OF CUSTOMERS’ LEVEL OF INTEREST”), Walker et al (US 8781894 B2), and Rosenbaum et al (US 20200341580 A1).
Referring to claim 1:
Suetsugi discloses a distribution program that causes a computer to execute a process, the process comprising:
extracting a video of a first area in a store, the video including a person and a product (captured image collector 41 collects captured picture images provided from the camera 15, par. [0088]);
identifying, by inputting the video of a first area in a store into an analyzer, a behavior that is performed by the person with respect to the product in the store (action analyzer 44 detects actions of persons in picture images captured by the camera 15 and acquires in-store action information on actions of customers in the store, par. [0091]);
identifying a first behavior type that is led by the behavior that is performed by the person with respect to the product among a plurality of behavior types that define transition of a process of the behavior for a product in the store (action analyzer 44 detects an event where a customer makes a stop in front of a shelf, and also detects an event where the customer performs actions such as taking a product from the shelf, returning a product to the shelf, and checking a product for selection, par. [0091]);
identifying a payment machine which the person uses from a video of a second area including the payment machine in the store (electronic point-of-purchase displays ... in front of or near the target customer to thereby cause them to display it, in response to the target customer's location information analyzed by the image analysis server, par. [0123]); and
when the identified first behavior type is at a predetermined level or higher and the product indicating the first behavior type is not included in products registered at the identified payment machine, transmitting information corresponding to the product indicating the first behavior type to the identified payment machine (customer information deliverer 92 may be configured to deliver a discounted price of a recommended product to electronic point-of-purchase displays, par. [0123], which inherently must be triggered at a threshold of customer behavior or interaction with respect to the product that is not registered at customer check-out).
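For illustration only, the trigger logic of this limitation can be sketched as follows; the data structures, threshold value, and transmission stub are hypothetical assumptions for the sketch and are not drawn from Suetsugi or the claims:

```python
# Illustrative sketch of the claimed trigger; all names and the
# threshold are hypothetical, not taken from Suetsugi or the claims.
PREDETERMINED_LEVEL = 2  # hypothetical scale for behavior-type level

def send_product_info(machine_id: str, product_id: str) -> None:
    # Hypothetical transport stub standing in for delivery to an
    # electronic point-of-purchase display / payment machine.
    print(f"machine {machine_id}: info for product {product_id}")

def maybe_transmit(behavior_level: int, product_id: str,
                   registered: set, machine_id: str) -> bool:
    """Transmit product information to the identified payment machine
    only when the behavior level meets the predetermined level AND the
    product was not registered at that machine."""
    if behavior_level >= PREDETERMINED_LEVEL and product_id not in registered:
        send_product_info(machine_id, product_id)
        return True
    return False
```

Under this sketch, a behavior level of 3 for a product absent from the set registered at the identified machine would trigger transmission, while an already-registered product would not.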
Suetsugi does not disclose (as underlined) identifying, by inputting the video of a first area in a store into a trained machine learning model, a behavior that is performed by the person with respect to the product in the store. There is no explicit implementation provided for how the action analyzer 44 detects an event where a customer makes a stop in front of a shelf, and also detects an event where the customer performs actions such as taking a product from the shelf, returning a product to the shelf, and checking a product for selection (par. [0091]).
However, the feature of using a trained machine learning model is a common and well-known solution which a person of ordinary skill in the art would select, depending on the circumstances, without exercising inventive skill, in order to identify or predict a behavior that is performed by a person with respect to a product in a store. For example, the action predictor 83 of Suetsugi, which can also be part of delivering a discounted price of a recommended product to an electronic point-of-purchase display, is explicitly implemented through a model created by using machine learning (par. [0114]). In any event, Buibas et al disclose inputting video from a store into a trained machine learning model (neural network) to identify behavior performed by a person with respect to a product in the store (col. 9, line 21 to col. 10, line 57).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Suetsugi in view of Buibas et al to identify a behavior that is performed by the person with respect to the product in the store by inputting the video of a first area in a store into a trained machine learning model, in order to detect human behavior by analyzing large amounts of data quickly and identifying complex patterns that might otherwise be difficult to discern, thereby allowing more accurate and nuanced behavioral identification, particularly with large datasets and real-time monitoring.
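As a purely illustrative sketch of the pipeline this rationale contemplates (the model interface, label set, and frame type below are hypothetical placeholders, not the implementation of Suetsugi or Buibas et al):

```python
from typing import Iterable, List, Protocol

# Hypothetical label set loosely mirroring the events in par. [0091].
BEHAVIOR_LABELS = ["stop_at_shelf", "take_product", "return_product",
                   "check_product"]

class TrainedModel(Protocol):
    # Any trained classifier exposing predict() over a window of frames.
    def predict(self, frames: List) -> int: ...

def identify_behavior(frames: Iterable, model: TrainedModel) -> str:
    """Feed a window of video frames from the first area to a trained
    machine learning model and map its class index to a behavior."""
    return BEHAVIOR_LABELS[model.predict(list(frames))]
```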
Suetsugi also does not disclose identifying a skeletal position of the person by inputting the video of an area in a store into a trained machine learning model and identifying the behavior that is performed by the person with respect to the product in the store based on the skeletal position relative to a position of the product. However, Buibas et al disclose detection and tracking of people and their interactions with items, specifically skeletal tracking and analysis to classify physical actions (see description of Figs. 5-8), and identifying a behavior based on skeletal position relative to a product's position by using a neural network trained to recognize items from changes across images and classifying an action performed on an item into classes comprising taking, putting, or moving (par. 14 and description of Figs. 3-4).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified Suetsugi in view of Buibas et al to have included the feature of identifying a skeletal position of the person by inputting the video of a first area in a store into the trained machine learning model and identifying the behavior that is performed by the person with respect to the product in the store based on the skeletal position relative to a position of the product, in order to improve the accuracy of identifying the behavior of a person with respect to the product in the store.
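A crude geometric heuristic conveys the idea of classifying an action from skeletal position relative to product position; it is only a stand-in for the trained neural network of Buibas et al, with the joint choice, units, and distance threshold all assumed:

```python
import math

def classify_action(wrist_track, product_pos, near=0.3):
    """Classify a clip from the tracked wrist positions of a skeleton
    relative to a product's shelf position (distances in meters,
    hypothetical threshold). A crude stand-in for the taking/putting
    classes of Buibas et al."""
    dists = [math.dist(p, product_pos) for p in wrist_track]
    if min(dists) > near:       # hand never came within reach
        return "no_interaction"
    if dists[-1] > dists[0]:    # ends farther from the shelf than it began
        return "take"
    return "put"
```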
Suetsugi in combination with Buibas et al does not teach (as underlined) the further feature of identifying skeleton information of the person by inputting the video into a trained machine learning model, the skeleton information corresponding to a specific skeleton model, the skeleton information including definition information of joints, coordinate information of joints and angle information between joints, thereby identifying the behavior that is performed by the person based on a coordinate of joint included in the skeleton information. However, Rosenbaum et al disclose a skeletal model having information including definitions of joints, coordinates of joints, and angle information between joints (abstract, par. 19, 22: defined attributes of the skeletal model are mapped to defined positions, the attributes representing orientations, positions, and angles), thereby identifying behaviors (detecting and classifying gestures) performed by a person based on a coordinate of joint included in the skeleton information (abstract, par. 14-17: a person’s gestures are recognized by evaluating a neural network using the input of attributes defined by the skeletal model).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Suetsugi and Buibas et al in view of Rosenbaum et al whereby the skeleton information corresponds to a specific skeleton model and includes definition information of joints, coordinate information of joints, and angle information between joints, thereby identifying the behavior that is performed by the person based on a coordinate of joint included in the skeleton information. Such a modification provides improvements to gesture recognition based on skeletal models and further conveys certain advantages, such as allowing one skeletal model to be used to classify various types of gestures using separately trained neural networks, each generated from a different mapping (par. 2, 58 in Rosenbaum et al).
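For concreteness, angle information between joints can be derived from joint coordinates as sketched below; the joint names and coordinates are hypothetical, and Rosenbaum et al's actual attribute mapping is not reproduced:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by joints a-b-c
    (e.g., shoulder-elbow-wrist), from 2-D joint coordinates."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

# Hypothetical skeleton: joint definitions mapped to coordinates.
skeleton = {"shoulder": (0.0, 1.4), "elbow": (0.3, 1.1), "wrist": (0.6, 1.3)}
elbow_angle = joint_angle(skeleton["shoulder"], skeleton["elbow"],
                          skeleton["wrist"])
```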
The combination of Suetsugi, Buibas et al, and Rosenbaum et al does not disclose (as underlined) transmitting information corresponding to the product when a level of the identified first behavior type corresponding to an interest, a request, and a comparison with respect to the product is at a predetermined level or higher. There is no explicit description of these types of behavior with respect to a product being detected by or for the action analyzer 44.
However, Popa et al disclose a system for analyzing human behavior patterns related to product interaction, which can reveal the customer’s level of interest. Basic customer actions are detected, such as browsing through a set of products, picking a product up, putting it back on the shelf, checking the product’s characteristics, fitting a product next to himself or herself, or trying on an item. High-level features representative of the customers’ level of interest are extracted by analyzing the sequence of shopping-related actions, their duration, and their repeatability during each shopping event. The output of this analysis is relevant at the reasoning level about the different shopping behavioral types (Section 1: Introduction). Various levels of interest are also described (Section 3.1: Customers’ Levels of Interest).
Furthermore, Walker et al disclose a system to provide price adjustments based on indicated product interest that includes reception of an indication from a customer of interest in various products, determination of a price adjustment for a product based on the indication of relative interest in different products, and transmission of the price adjustment to the customer. The indication of interest in the product comprises one or more of the following: a request for information regarding the product, in which the request identifies the product; an indication that the customer is evaluating the product in the retail store; an indication that the customer is physically near the product in the retail store; and an indication that the customer is physically near the product in the retail store for a predetermined amount of time.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Suetsugi, Buibas et al, and Rosenbaum et al in view of Popa et al and Walker et al to transmit information corresponding to the product when a level of the identified first behavior type corresponding to an interest, a request, and a comparison with respect to the product is at a predetermined level or higher in order to enhance the customers’ shopping experience and allow retailers the opportunity to offer a price adjustment that may incent the customer to purchase a product that would not otherwise be purchased, thereby potentially increasing a total profit received by the retailer.
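The combined teaching can be sketched as a simple scoring scheme in the style of Popa et al's features (action sequence, duration, repeatability); the action names and weights below are hypothetical assumptions, not values from the reference:

```python
from collections import Counter

ACTION_WEIGHTS = {"browse": 1.0, "pick_up": 2.0, "check": 3.0,
                  "try_on": 4.0}  # hypothetical weights

def interest_level(actions, durations):
    """Score one shopping event from the sequence of detected actions,
    their durations (seconds), and their repeatability."""
    score = sum(ACTION_WEIGHTS.get(a, 0.0) * d
                for a, d in zip(actions, durations))
    score += sum(n - 1 for n in Counter(actions).values())  # reward repeats
    return score
```

The resulting level would then be compared against the predetermined level, as in the trigger sketched under claim 1, before any product information or price adjustment is transmitted to the identified payment machine.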
Referring to claims 8 and 9:
These are the method and apparatus claims with the steps and elements/functions corresponding to the process executed by a computer as set forth in claim 1 and are therefore rejected for the same reasons as presented above.
Referring to claims 3-7 and 11-15:
Suetsugi teaches or implies: determining behavior process transitions; identifying products to be purchased by the person and whether an identified behavior type is associated with the identified products; determining distribution of a coupon corresponding to the product indicating the first behavior type when that product is not included in the identified products; identifying a specific person and a product on a storage shelf indicating the first behavior type; identifying a payment machine at which the specific person performs payment; transmitting a coupon and an advertisement for the product indicating the behavior type or carried by the person at the payment machine based on purchase history information; and, when a plurality of products indicating the first behavior type are present, distributing one of the coupons for the plurality of products based on prices of the plurality of products (e.g., par. [0120]-[0123]).
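The price-based coupon selection among a plurality of products of interest could, for illustration, reduce to a one-line choice; the data shape and the highest-price policy are assumptions for the sketch, not Suetsugi's disclosure:

```python
def choose_coupon(products):
    """Given several products showing the first behavior type, pick one
    coupon based on price (here, hypothetically, the highest-priced
    product, on the theory that its discount is the strongest incentive).
    Each product is assumed to be a dict with "price" and "coupon_id"."""
    return max(products, key=lambda p: p["price"])["coupon_id"]
```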
Claims 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Suetsugi, Buibas et al, Rosenbaum et al, Popa et al, and Walker et al as applied to claims 1 and 9 above, and further in view of Munthiu (“The buying decision process and types of buying decision behavior”).
Referring to claims 16-17:
While the combination of Suetsugi, Buibas et al, Rosenbaum et al, Popa et al, and Walker et al does not specifically describe a plurality of behavior types as corresponding to stages in a purchase psychological process, human behavior, such as actions or interactions reflecting interest in an object or a product, is conventionally defined and understood to be a psychological process. Models of the purchase psychological process are also well known, such as the Attention, Interest, Desire, and Action (AIDA) model proposed by Elias St. Elmo Lewis in 1898, a universal and timeless model still in use today, along with many variations, challenges, and reframings, with a vast amount of research and literature produced on the subject.
Munthiu, referred to here, teaches that the stages of a psychological process play an important role in understanding how a consumer’s buying decision process takes place and that smart companies should investigate and understand this process at the deepest level possible. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have considered, in the combination of Suetsugi, Buibas et al, Rosenbaum et al, Popa et al, and Walker et al, in view of Munthiu, the plurality of behavior types as corresponding to stages in a purchase psychological process, whereby an understanding of these behaviors provides a means for a company (the store) to improve sales.
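A toy mapping illustrates behavior types corresponding to stages of a purchase psychological process in the AIDA tradition; the behavior names and the mapping itself are hypothetical, not taken from the claims or the references:

```python
# Hypothetical behavior-to-stage mapping in the AIDA tradition.
BEHAVIOR_TO_STAGE = {"stop_at_shelf": "attention",
                     "check_product": "interest",
                     "compare_products": "desire",
                     "take_to_checkout": "action"}
ORDER = {"attention": 0, "interest": 1, "desire": 2, "action": 3}

def stage_transitions(behaviors):
    """Collapse a detected behavior sequence into its forward
    transitions through the purchase psychological process."""
    stages = []
    for b in behaviors:
        s = BEHAVIOR_TO_STAGE.get(b)
        if s and (not stages or ORDER[s] > ORDER[stages[-1]]):
            stages.append(s)
    return stages
```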
Cited Art
The prior art and other references made of record and not relied upon are considered pertinent to applicant's disclosure.
Tanabiki et al (US 9355305 B2) disclose a posture estimation device for estimating a wide variety of 3-dimensional postures by using a skeletal model. The posture estimation device (200) has: a skeletal backbone estimation unit (230) for estimating the position of a feature location of a person within an acquired image; a location extraction unit (240) which generates a likelihood map indicating the certainty that a location other than the feature location of the person exists in the acquired image based on the position of the feature location of the person; and a skeletal model evaluation unit (250) for evaluating, based on the likelihood map, a candidate group which includes a plurality of 2-dimensional skeletal models as candidates and such that each 2-dimensional skeletal model is configured from a line group representing each location and a point group representing coupling between each location and corresponds to one posture of the person.
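Tanabiki et al's evaluation of candidate 2-dimensional skeletal models against a likelihood map can be caricatured as scoring sampled points along each limb segment; this sketch assumes a dense likelihood grid and a simple line-sampling scheme, and is not the patent's actual method:

```python
def score_candidate(limbs, likelihood, samples=10):
    """Score a candidate 2-D skeletal model by summing likelihood-map
    values sampled along each limb segment. `limbs` is a list of
    ((x1, y1), (x2, y2)) pixel pairs; `likelihood` is a 2-D grid
    indexed as likelihood[y][x] (both hypothetical shapes)."""
    total = 0.0
    for (x1, y1), (x2, y2) in limbs:
        for i in range(samples + 1):
            t = i / samples
            x = round(x1 + t * (x2 - x1))
            y = round(y1 + t * (y2 - y1))
            total += likelihood[y][x]
    return total
```

The highest-scoring candidate in the group would then be taken as the posture estimate, consistent with the evaluation role described for unit (250).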
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Scott Rogers whose telephone number is 571-272-7467. The examiner can normally be reached 8 am to 7 pm flex.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Abderrahim Merouan can be reached on 571-270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Scott A Rogers/
Primary Examiner, Art Unit 2683
23 February 2026