Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/29/2025 has been entered.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Response to Amendment
Applicant’s arguments and claim amendments, see pages 11-13, filed 12/29/2025, with respect to claims 1 and 9-10 have been fully considered but are not found persuasive. The 35 U.S.C. 101 abstract idea rejection is restated later in this rejection.
Examiner first reminds the applicant that claims 7 and 16 are NOT rejected under the 35 U.S.C. 101 abstract idea rejection.
Regarding the reference to the August 4, 2025 USPTO Memorandum, Examiner reminds the applicant that all claims must be analyzed as a whole and under the Broadest Reasonable Interpretation (BRI). The claims should not be interpreted only based on the explanation within the specification. Examiner agrees that, based on what is stated within Paragraphs [0025]-[0030], a person cannot analyze a video under the specific definitions of “processing high-frequency pixel data and coordinate transformations” and “extract skeletal features and compute transition probabilities in real-time”. However, analyzing a video image, under BRI, includes advancing the video frame by frame and analyzing each frame slowly. Without incorporating what is stated in the specification, Examiner does not agree that “analyzing the video image” must be exactly the same as what is stated in the specification.
Regarding the reference to Example 47, Claim 3 of the July 2024 Guidance, claim 3 of Example 47 recites that the claim as a whole is performed using an ANN. In the current independent claims, only “specifying a behavior exhibited by the tracked person in the inside of the store” is performed with a machine learning model. Everything before and after this limitation does not have to be performed by a machine learning model. In addition, a “machine-learning-transformed data structure” includes output in English. As such, the machine learning model can output human-readable language for the further steps.
Regarding the reference to the December 5, 2025 USPTO Memorandum, Examiner points to the application that led to the Memorandum, 16/319,040 (US 2019/0236482 A1). Application 16/319,040 included the “training” of a machine learning model. In particular, the independent claims of 16/319,040 are directed to the “training” of a machine learning model.
The independent claims of the current application, as a whole, are not about “training” a machine learning model. The claims use a “trained” model, meaning the “training” is done outside the scope of the independent claims. Under BRI, using a “trained” model includes using the model directly without any training.
However, as shown in the previous final rejection and restated above, claims 7 and 16 are NOT REJECTED under the 35 U.S.C. 101 abstract idea rejection because they recite the “training” of a machine learning model.
In addition, Examiner points to the previous reasoning that only “specifying a behavior exhibited by the tracked person in the inside of the store” is performed with a machine learning model. Whether the behavior is associated with the behavior types of the purchase psychological process, determining whether a person has left an area, and specifying, based on the first “behavior type”, whether a person has purchased a product are all outside the scope of the machine learning model as the current claims stand. It is unclear how using the “trained” machine learning model in only one of the multiple steps of the whole claim, with a broad meaning of “a behavior”, improves upon the asserted inaccuracy that is not exactly specified within the specification.
As such, the 35 U.S.C. 101 abstract idea rejection will be restated below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-6, 8-15, and 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The limitations, under their broadest reasonable interpretation, cover mental processes (concepts performed in a human mind, including as an observation, evaluation, judgment, or opinion), certain methods of organizing human activity, and/or mathematical concepts and calculations. The independent claims 1, 9, and 10 recite a non-transitory computer readable recording medium, a method, and an apparatus for analyzing purchasing behaviors of customers. This judicial exception is not integrated into a practical application because the steps do not add meaningful limitations to be considered specifically applied to a particular technological problem to be solved. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the steps of the claimed invention can be performed mentally, and no additional features in the claims would preclude them from being performed as such, except for the generic computer elements recited at a high level of generality (i.e., processor, memory).
According to the USPTO guidelines, a claim is directed to non-statutory subject matter if:
STEP 1: the claim does not fall within one of the four statutory categories of invention (process, machine, manufacture or composition of matter), or
STEP 2: the claim recites a judicial exception, e.g. an abstract idea, without reciting additional elements that amount to significantly more than the judicial exception, as determined using the following analysis:
STEP 2A (PRONG 1): Does the claim recite an abstract idea, law of nature, or natural phenomenon?
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application?
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Using the two-step inquiry, it is clear that the independent claims 1, 9, and 10 are directed to an abstract idea as shown below:
STEP 1: Do the claims fall within one of the statutory categories? YES. Independent claims 1, 9, and 10 are directed to a non-transitory computer readable recording medium, an information processing method, and an information processing apparatus for analyzing purchasing behaviors of customers.
STEP 2A (PRONG 1): Is the claim directed to a law of nature, a natural phenomenon or an abstract idea? YES, the claims are directed toward mental processes and/or mathematical concepts (i.e., an abstract idea).
With regard to STEP 2A (PRONG 1), the guidelines provide three groupings of subject matter that are considered abstract ideas:
Mathematical concepts – mathematical relationships, mathematical formulas or equations, mathematical calculations;
Certain methods of organizing human activity – fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
Mental processes – concepts that are practicably performed in the human mind (including an observation, evaluation, judgment, opinion).
Independent claims 1, 9, and 10 comprise mental processes and/or mathematical concepts that can be practicably performed in the human mind (or by generic computers or components configured to perform the method) and are, therefore, an abstract idea.
extracting a person from a video image in which a predetermined area in an inside of a store is captured; (The step of recognizing a person in a video falls into the “mental processes” grouping of abstract ideas because recognizing a person in a video can be performed in the human mind as an observation, evaluation, judgement or opinion. A person can recognize a specific movie star when watching a movie.)
tracking the extracted person by analyzing the video image; (The step of tracking a person in a video falls into the “mental processes” grouping of abstract ideas because tracking a person in a video can be performed in the human mind as an observation, evaluation, judgement or opinion. A security guard can track the movement of a customer through the surveillance camera video footage in a store.)
specifying a behavior exhibited by the tracked person in the inside of the store by inputting the video image into a trained machine learning model; (The step of recognizing a behavior of a person falls into the “mental processes” grouping of abstract ideas because recognizing a behavior of a person can be performed in the human mind as an observation, evaluation, judgement or opinion. A store employee recognizes that a customer has picked up a product.)
specifying a first behavior type that is reached by the behavior exhibited by the tracked person and that is associated with a purchase psychological process for a commodity product in the store from among a plurality of behavior types in each of which a transition of processes of the behaviors for a commodity product in the inside of the store is defined; (The step of recognizing a behavior of a person throughout the process of purchasing a product falls into the “mental processes” grouping of abstract ideas because recognizing a behavior of a person throughout the process of purchasing a product can be performed in the human mind as an observation, evaluation, judgement or opinion. A store employee recognizes that a customer has picked up a product, looked at the label, put the item into the shopping cart, or exhibited various other behaviors that are part of the process of purchasing a product from a store. In addition, a store employee can infer that the customer was not satisfied with the product he/she just picked up. Examiner notes that mental processes, according to MPEP 2106.04(a) Abstract Ideas, include observation, evaluation, judgment, and opinion.)
determining whether or not the tracked person has moved to outside a predetermined area; (The step of determining if a person has moved outside of an area falls into the “mental processes” grouping of abstract ideas because determining if a person has moved outside of an area can be performed in the human mind as an observation, evaluation, judgement or opinion. A doorman can determine that a guest has moved through the door and close the door promptly.)
specifying, based on the first behavior type, when it is determined that the tracked person has moved to outside the predetermined area, whether the tracked person has purchased the commodity product or has left without purchasing the commodity product. (The step of determining if a person has purchased a product or not falls into the “mental processes” grouping of abstract ideas because determining if a person has purchased a product or not can be performed in the human mind as an observation, evaluation, judgement or opinion. A store employee can observe if a customer has stopped around the check-out area to purchase a product or has directly walked out of the store without purchasing anything.)
These limitations, as drafted, recite a simple process that, under their broadest reasonable interpretation, covers performance of the limitations in the mind or by a human. The Examiner notes that under MPEP 2106.04(a)(2)(III), the courts consider a mental process (thinking) that “can be performed in the human mind, or by a human using a pen and paper” to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, “methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas - the ‘basic tools of scientific and technological work’ that are open to all.” 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) (“‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’” (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
As such, a person could mentally track a customer around the store and determine whether the customer has purchased a product or left without buying anything by observing the customer’s behavior around the store. The mere nominal recitation that the various steps are being executed by a processor does not take the limitations out of the mental process and/or mathematical concepts groupings. Thus, the claims recite a mental process.
STEP 2A (PRONG 2): Does the claim recite additional elements that integrate the judicial exception into a practical application? NO, the claims do not recite additional elements that integrate the judicial exception into a practical application.
With regard to STEP 2A (prong 2), whether the claim recites additional elements that integrate the judicial exception into a practical application, the guidelines provide the following exemplary considerations that are indicative that an additional element (or combination of elements) may have integrated the judicial exception into a practical application:
an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field;
an additional element that applies or uses a judicial exception to affect a particular treatment or prophylaxis for a disease or medical condition;
an additional element implements a judicial exception with, or uses a judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim;
an additional element effects a transformation or reduction of a particular article to a different state or thing; and
an additional element applies or uses the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception.
While the guidelines further state that the exemplary considerations are not an exhaustive list and that there may be other examples of integrating the exception into a practical application, the guidelines also list examples in which a judicial exception has not been integrated into a practical application:
an additional element merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea;
an additional element adds insignificant extra-solution activity to the judicial exception; and
an additional element does no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Independent claims 1, 9, and 10 do not recite any of the exemplary considerations that are indicative of an abstract idea having been integrated into a practical application.
Independent claims 1, 9, and 10 disclose:
a non-transitory computer-readable recording medium, a processor, and a memory, which are generic computer components that do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea in a system.
a trained machine learning model is considered merely instructions included within the memory to implement an abstract idea on a computer, mainly because the model is already trained; it is now a program that does not change unless training is performed again to change the parameters of the model. A trained model is, to a person of ordinary skill in the art, a model that is no longer in training. Usually, a model’s internal parameters within the hidden layers are only changed while the model is training. As such, a trained model’s internal parameters usually do not change, as changing them may lead to inaccuracy. If applicant’s trained model also leads to changes in the parameters, that would have to be written into the claim language.
These limitations are recited at a high level of generality (i.e., as a general action or change being taken based on the results of the acquiring step) and amount to mere post-solution actions, which is a form of insignificant extra-solution activity. Further, the claimed elements are recited generically and operate in their ordinary capacity such that they do not use the judicial exception in a manner that imposes a meaningful limit on the judicial exception. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
STEP 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No, the claims do not recite additional elements that amount to significantly more than the judicial exception.
With regard to STEP 2B, whether the claims recite additional elements that provide significantly more than the recited judicial exception, the guidelines specify that the pre-guideline procedure is still in effect. Specifically, that examiners should continue to consider whether an additional element or combination of elements:
adds a specific limitation or combination of limitations that are not well-understood, routine, conventional activity in the field, which is indicative that an inventive concept may be present; or
simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, which is indicative that an inventive concept may not be present.
Independent claims 1, 9, and 10 do not recite any additional elements that are not well-understood, routine, or conventional. The use of generic computer elements is a routine, well-understood, and conventional process that is performed by computers.
Thus, since independent claims 1, 9, and 10: (a) are directed toward an abstract idea, (b) do not recite additional elements that integrate the judicial exception into a practical application, and (c) do not recite additional elements that amount to significantly more than the judicial exception, it is clear that independent claims 1, 9, and 10 are not eligible subject matter under 35 U.S.C. 101.
Regarding claims 2 and 11: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitations:
“identifying a skeletal position of the tracked person by inputting the video of a first area in a store into the trained machine learning model; and
identifying the behavior that is performed by the tracked person with respect to the commodity product in the store based on the skeletal position relative to a position the product” fall into the mental processes grouping of abstract ideas.
Regarding claims 3 and 12: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitations:
“determining, when the person is situated in a first behavior process from among the plurality of behavior types in each of which the transition of the processes of the behaviors is defined, whether or not the person exhibits a behavior associated with a second behavior process that is a transition destination of the first behavior process; and
determining, when it is determined that the person has exhibited the behavior associated with the second behavior process, that the person has transitioned to the second behavior process” fall into the mental processes grouping of abstract ideas.
Regarding claims 4 and 13: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation:
“the transition of the processes of the behaviors is changed in the order of a first behavior process connected to attention and notice, a second behavior process connected to interest and curiosity, a third behavior process connected to a desire, a fourth behavior process connected to a comparison, and a fifth behavior process connected to a behavior” merely adds a definition to the previous limitation and does not add a meaningful limitation to the abstract idea.
Regarding claims 5 and 14: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitations:
“specifying a number of persons who have left without purchasing the commodity product at each of the first behavior types in a case where it is specified that each of the plurality of tracked persons has left without purchasing the commodity product” falls into the mental processes grouping of abstract ideas;
“generating an image that indicates a proportion of persons who have left at each of the first behavior types based on the number of persons who have left at the first behavior type relative to a total number of the plurality of tracked persons” is insignificant post-solution activity of generating data that does not add a meaningful limitation to the abstract idea;
Regarding claims 6 and 15: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitations:
“storing, in an associated manner, attribute information on the tracked person and information on the process performed by the tracked person in a case where it is specified that the tracked person has left without purchasing the commodity product” is insignificant post-solution activity of gathering data that does not add a meaningful limitation to the abstract idea;
Regarding claims 8 and 17: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation:
“specifying, based on a distance between the plurality of extracted persons, a group between the plurality of extracted person” falls into the mental processes grouping of abstract ideas.
Regarding claims 18-20: the additional limitations do not integrate the mental process into a practical application or add significantly more to the mental process. The limitation:
“use an existing skeleton detection technology to generate skeleton information on the tracked person” is insignificant pre-solution activity of generating data that does not add a meaningful limitation to the abstract idea;
“estimate a pose or a motion of the tracked person by using pose estimation technology based on the skeleton information; and
specify the behavior based on the estimated pose or motion” falls into the mental processes grouping of abstract ideas.
The “existing skeleton detection technology” is a generic computer component that does not add a meaningful limitation to the abstract idea because it amounts to simply implementing the abstract idea in a system.
Regarding claims 7 and 16, the additional limitation:
“training a machine learning model that is used to detect a leaving person by using, as training data, at least one of the specified behavior exhibited by each of a purchaser who has purchased the commodity product and the leaving person who has left without purchasing the commodity product and attribute information on each of the purchaser and the leaving person” is NOT directed toward an abstract idea since it recites additional elements that integrate the judicial exception into a practical application and add significantly more than the judicial exception. Therefore, claims 7 and 16 are not directed to an abstract idea and are not rejected under 35 U.S.C. 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3-4, 6-7, 9-10, 12-13, and 15-16 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Moon et al. (US 8,219,438 B1, hereinafter Moon).
Regarding claims 1, 9, and 10, Moon discloses
Claim 1: A non-transitory computer-readable recording medium having stored therein an information processing program that causes a computer to execute a process comprising:
Claim 9: An information processing method by a computer, the method comprising:
Claim 10: An information processing apparatus, comprising: a memory; and a processor coupled to the memory and configured to:
extracting a person from a video image in which a predetermined area in an inside of a store is captured (Col. 9 Lns. 51-53: “The system accepts two different sources of data for processing: the facial image sequence 633 and the body image sequence 715”, Col. 10 Lns. 25-27: “the system detects whether the shopper has approached a shelf space and is ready to interact with products in the shelf space”);
tracking the extracted person by analyzing the video image (Col. 10 Lns. 13-15: “Given the body image sequence 715, the body detection and tracking 722 step finds any human bodies in the view and individually tracks them.”);
specifying a behavior exhibited by the tracked person in the inside of the store by inputting the video image into a trained machine learning model (Col. 11 Lns. 37-41: “The body orientation estimation training 729 step feeds the body images detected from the previous step along with the annotated orientation of the body to the learning machines, so that the trained machines are used in the body orientation estimation 726 step”);
specifying a first behavior type that is reached by the behavior exhibited by the tracked person and that is associated with a purchase psychological process for a commodity product in the store from among a plurality of behavior types in each of which a transition of processes of the behaviors for a commodity product in the inside of the store is defined (Fig. 18, Col. 10 Lns. 32-35: “The interaction recognition 779 step identifies such interaction behaviors with products-picking up products, reading the prices or label, or returning the products to the shelf”, Col. 15 Lns. 59-66: “FIG. 18 shows a model of shopper behaviors as observable manifestations of the shopper's mental process. Once a shopper is in the store, she starts browsing 772 through the store aisles. When she takes notice of a visual element-such as a product, a product category, or a promotion-that captures her interest 790 (or that she had an original intention 790 to buy), then she reveals her interest by stopping or turning toward it (attention 774 phase)”, Col. 16 Lns. 1-9: “she enters the engagement 776 phase where she further investigates products with an intention to purchase if she decides to do so. If she becomes serious about purchasing 796 one of the products, then the shopping is in the interaction 778 phase; she picks up a particular product to check the price, read labels, or make a comparison to other products. Once she makes a final decision to purchase 797, then she puts the product in the shopping basket or cart (purchase 780 phase).”, Behaviors 772-780 in Fig. 18 are all possible behavior types with mental processes attached to the behaviors);
determining whether or not the tracked person has moved to outside a predetermined area (Col. 10 Lns. 28-30: “Once in the engagement mode, the shopper can quickly check the products and decide to leave the space to look for different products”); and
specifying, based on the first behavior type, when it is determined that the tracked person has moved to outside the predetermined area, whether the tracked person has purchased the commodity product or has left without purchasing the commodity product (Fig. 18, Fig. 19: see the middle decision flowchart; it is possible for the system to identify that the shopper did not purchase anything and continued browsing. Col. 10 Lns. 40-42: “Actual purchase can be detected in the purchase detection 781 step, based on the visual changes detected from the shelf space and the shopping cart”, Col. 16 Lns. 1-9: “she enters the engagement 776 phase where she further investigates products with an intention to purchase if she decides to do so. If she becomes serious about purchasing 796 one of the products, then the shopping is in the interaction 778 phase; she picks up a particular product to check the price, read labels, or make a comparison to other products. Once she makes a final decision to purchase 797, then she puts the product in the shopping basket or cart (purchase 780 phase).”).
Regarding claim 3, dependent upon claim 1, Moon discloses all of the elements as stated above regarding claim 1.
Moon further discloses
determining, when the person is situated in a first behavior process from among the plurality of behavior types in each of which the transition of the processes of the behaviors is defined, whether or not the person exhibits a behavior associated with a second behavior process that is a transition destination of the first behavior process; and determining, when it is determined that the person has exhibited the behavior associated with the second behavior process, that the person has transitioned to the second behavior process (Fig. 18, Col. 15 Lns. 59-66: “FIG. 18 shows a model of shopper behaviors as observable manifestations of the shopper's mental process. Once a shopper is in the store, she starts browsing 772 through the store aisles. When she takes notice of a visual element-such as a product, a product category, or a promotion-that captures her interest 790 (or that she had an original intention 790 to buy), then she reveals her interest by stopping or turning toward it (attention 774 phase)”).
Regarding claim 4, dependent upon claim 1, Moon discloses all of the elements as stated above regarding claim 1.
Moon further discloses
wherein the transition of the processes of the behaviors is changed in the order of a first behavior process connected to attention and notice, a second behavior process connected to interest and curiosity, a third behavior process connected to a desire, a fourth behavior process connected to a comparison, and a fifth behavior process connected to a behavior (Fig. 18: there are 5 behaviors, Col. 16 Lns. 1-9: “she enters the engagement 776 phase where she further investigates products with an intention to purchase if she decides to do so. If she becomes serious about purchasing 796 one of the products, then the shopping is in the interaction 778 phase; she picks up a particular product to check the price, read labels, or make a comparison to other products. Once she makes a final decision to purchase 797, then she puts the product in the shopping basket or cart (purchase 780 phase).”).
Regarding claim 6, dependent upon claim 1, Moon discloses all of the elements as stated above regarding claim 1.
Moon further discloses
storing, in an associated manner, attribute information on the tracked person and information on the process performed by the tracked person in a case where it is specified that the tracked person has left without purchasing the commodity product (Col. 10 Lns. 8-12: “The facial images, accurately localized based on the localized features in the facial feature localization 410 step, are fed to the demographics recognition 800 step to classify the face into demographic categories, such as gender, age, and ethnicity”, Col. 10 Lns. 28-30: “Once in the engagement mode, the shopper can quickly check the products and decide to leave the space to look for different products”).
Regarding claim 7, dependent upon claim 1, Moon discloses all of the elements as stated above regarding claim 1.
Moon further discloses
training a machine learning model that is used to detect a leaving person by using, as training data, at least one of the specified behavior exhibited by each of a purchaser who has purchased the commodity product and the leaving person who has left without purchasing the commodity product and attribute information on each of the purchaser and the leaving person (Col. 10 Lns. 8-12: “The facial images, accurately localized based on the localized features in the facial feature localization 410 step, are fed to the demographics recognition 800 step to classify the face into demographic categories, such as gender, age, and ethnicity”, Col. 10 Lns. 28-30: “Once in the engagement mode, the shopper can quickly check the products and decide to leave the space to look for different products”, Col. 10 Lns. 40-42: “Actual purchase can be detected in the purchase detection 781 step, based on the visual changes detected from the shelf space and the shopping cart”, Col. 15 Lns. 37-44: “FIG. 17 shows the gross-level interest estimation training 974 step in an exemplary embodiment of the present invention. In the embodiment, the track and body orientation history is used to estimate the visual target of interest. The step employs the Hidden Markov Model (HMM) as the state transition and uncertainty model that associates the shopper's movement (including the body orientation) with the target of the interest.”, Col. 15 Lns. 55-58: “HMM can be trained on a large number of training data using a standard HMM training algorithm such as the Baum-Welch algorithm.”).
Regarding claim 12, dependent upon claim 10, Moon discloses all of the elements as stated above regarding claim 10. Claim 12 is rejected as applied to claim 3.
Regarding claim 13, dependent upon claim 10, Moon discloses all of the elements as stated above regarding claim 10. Claim 13 is rejected as applied to claim 4.
Regarding claim 15, dependent upon claim 10, Moon discloses all of the elements as stated above regarding claim 10. Claim 15 is rejected as applied to claim 6.
Regarding claim 16, dependent upon claim 10, Moon discloses all of the elements as stated above regarding claim 10. Claim 16 is rejected as applied to claim 7.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 2, 8, 11, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Moon et al. (US 8,219,438 B1, hereinafter Moon) in view of Zalewski et al. (US 2020/0118400 A1, hereinafter Zalewski).
Regarding claim 2, dependent upon claim 1, Moon discloses all of the elements as stated above regarding claim 1.
However, Moon does not disclose
identifying a skeletal position of the tracked person by inputting the video of a first area in a store into the trained machine learning model; and identifying the behavior that is performed by the tracked person with respect to the commodity product in the store based on the skeletal position relative to a position of the product.
Zalewski teaches
identifying a skeletal position of the tracked person by inputting the video of a first area in a store into the trained machine learning model; and identifying the behavior that is performed by the tracked person with respect to the commodity product in the store based on the skeletal position relative to a position of the product (Para [0169]: “This may be a case where a parent's shopping with children or a friend the shopping with other friends. Thus, motion detection and also skeletal and/or limb detection and tracking can be used two determine the motions, interactions, locations, and actions taken by the user”, Para [0173]: “Broadly speaking, the multiple sensors dispersed an integrated throughout the store can be collecting information while also collecting information regarding user's presence and/or users interacting with the specific items. Machine learning systems can also be integrated, so as to collect information regarding the sensors and that user interaction, which can be optimized to select or prioritize data from certain sensors to make assumptions”, Para [0726]: “Broadly speaking, the various sensors integrated throughout the store, can be used to track the user's presence in various locations within the store, as the user shops. In some embodiments, the user's body outline or skeletal features can be tracked, in order to determine when the user is interacting with specific items, which may be of store shelves or in special compartments, surfaces, displays, regions, locations, throughout the store”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Moon with identifying a skeletal position of a person and determining a behavior based on the skeletal position, as taught by Zalewski, to effectively improve the efficiency for the retailer and the experience for the customers.
Regarding claim 8, dependent upon claim 1, Moon discloses all of the elements as stated above regarding claim 1.
However, Moon does not disclose
specifying, based on a distance between the plurality of extracted persons, a group among the plurality of extracted persons.
Zalewski teaches
specifying, based on a distance between the plurality of extracted persons, a group among the plurality of extracted persons (Para [0206]: “Guidance messages may indicate the location in the store where a friend or family member is within the store. Facial scans may be taken of shoppers entering the store. Friend or family member determination may be predicted based on a model monitoring entry and store activity and may be reinforced by historical data showing clusters of matching faces defining a family, partners, etc. Clustering and determining of shopping groups may also be derived in whole or in part from third party services, such as social networks such as Facebook”, Para [0208]: “The entry lane be equipped with cameras or other sensors to determine the clustering of members of a same group as they pass through the entry lane. Each member may be weighed and identified. Upon exit, depending on the configuration of the weight scale, if another member or members of the group are occupying the exit weight scale, their weight, which is known, can be accommodated for, such that the incremental weight of the acquired goods can be compared against the calculated weight of the acquired goods.”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Moon with specifying a group among a plurality of extracted persons based on the distance between them, as taught by Zalewski, to effectively improve the efficiency for the retailer and the experience for the customers.
Regarding claim 11, dependent upon claim 10, Moon discloses all of the elements as stated above regarding claim 10. Claim 11 is rejected as applied to claim 2.
Regarding claim 17, dependent upon claim 10, Moon discloses all of the elements as stated above regarding claim 10. Claim 17 is rejected as applied to claim 8.
Regarding claims 18, 19, and 20, dependent upon claims 10, 1, and 9, respectively, Moon discloses all of the elements as stated above regarding claims 10, 1, and 9.
However, Moon does not disclose
use an existing skeleton detection technology to generate skeleton information on the tracked person;
estimate a pose or a motion of the tracked person by using pose estimation technology based on the skeleton information; and
specify the behavior based on the estimated pose or motion.
Zalewski teaches
use an existing skeleton detection technology to generate skeleton information on the tracked person (Para [0169]: “This may be a case where a parent's shopping with children or a friend the shopping with other friends. Thus, motion detection and also skeletal and/or limb detection and tracking can be used two determine the motions, interactions locations, and actions taken by the user.”, Para [0173]: “Broadly speaking, the multiple sensors dispersed an integrated throughout the store can be collecting information while also collecting information regarding user's presence and/or users interacting with the specific items. Machine learning systems can also be integrated, so as to collect information regarding the sensors and that user interaction, which can be optimized to select or prioritize data from certain sensors to make assumptions”);
estimate a pose or a motion of the tracked person by using pose estimation technology based on the skeleton information (Para [0169]: “This may be a case where a parent's shopping with children or a friend the shopping with other friends. Thus, motion detection and also skeletal and/or limb detection and tracking can be used two determine the motions, interactions locations, and actions taken by the user.”, Para [0173]: “Broadly speaking, the multiple sensors dispersed an integrated throughout the store can be collecting information while also collecting information regarding user's presence and/or users interacting with the specific items. Machine learning systems can also be integrated, so as to collect information regarding the sensors and that user interaction, which can be optimized to select or prioritize data from certain sensors to make assumptions”, Para [0726]: “Broadly speaking, the various sensors integrated throughout the store, can be used to track the user's presence in various locations within the store, as the user shops. In some embodiments, the user's body outline or skeletal features can be tracked, in order to determine when the user is interacting with specific items, which may be of store shelves or in special compartments, surfaces, displays, regions, locations, throughout the store”); and
specify the behavior based on the estimated pose or motion (Para [0169]: “This may be a case where a parent's shopping with children or a friend the shopping with other friends. Thus, motion detection and also skeletal and/or limb detection and tracking can be used two determine the motions, interactions locations, and actions taken by the user.”, Para [0726]: “Broadly speaking, the various sensors integrated throughout the store, can be used to track the user's presence in various locations within the store, as the user shops. In some embodiments, the user's body outline or skeletal features can be tracked, in order to determine when the user is interacting with specific items, which may be of store shelves or in special compartments, surfaces, displays, regions, locations, throughout the store”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Moon with generating skeleton information on a tracked person, estimating a pose or motion based on the skeleton information, and specifying a behavior based on the estimated pose or motion, as taught by Zalewski, to effectively improve the efficiency for the retailer and the experience for the customers.
Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Moon et al. (US 8,219,438 B1, hereinafter Moon) in view of Kondo et al. (JP 2016-146026 A, hereinafter Kondo).
Regarding claim 5, dependent upon claim 1, Moon discloses all of the elements as stated above regarding claim 1.
Moon further discloses
specifying a number of persons who have left without purchasing the commodity product at each of the first behavior types in a case where it is specified that each of the plurality of tracked persons has left without purchasing the commodity product (Col. 10 Lns. 40-42: “Actual purchase can be detected in the purchase detection 781 step, based on the visual changes detected from the shelf space and the shopping cart”).
However, Moon does not teach
specifying a number of persons who have left without purchasing the commodity product at each of the first behavior types in a case where it is specified that each of the plurality of tracked persons has left without purchasing the commodity product; and generating an image that indicates a proportion of persons who have left at each of the first behavior types based on the number of persons who have left at the first behavior type relative to a total number of the plurality of tracked persons.
Kondo teaches
specifying a number of persons who have left without purchasing the commodity product at each of the first behavior types in a case where it is specified that each of the plurality of tracked persons has left without purchasing the commodity product; and generating an image that indicates a proportion of persons who have left at each of the first behavior types based on the number of persons who have left at the first behavior type relative to a total number of the plurality of tracked persons (Fig. 5, 7, 16A, 16B, 17A, 18, Abstract: “a step of acquiring, by using a purchase history information on purchased products, which are products purchased by the customer in a predetermined period prior to the store entry period in the store 2 when there is no purchase by the customer during the period; and a step of updating a customer-by-customer inventory management table containing information on the customer and the purchased product if the purchased product is out of stock during the store entry period”, Kondo teaches that tallying and tracking the reasons customers do not purchase a product is possible. The behaviors of Moon can be one of the types to be tracked.).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Moon with tracking and counting customers who did not buy a product and producing a table that shows the proportions, as taught by Kondo, to effectively increase the efficiency in tracking the cause of a customer entering a store and leaving without purchasing.
Regarding claim 14, dependent upon claim 10, Moon discloses all of the elements as stated above regarding claim 10. Claim 14 is rejected as applied to claim 5.
Relevant Prior Art Directed to State of Art
Kambara et al. (US 11,030,604 B2, hereinafter Kambara) is prior art not applied in the rejection(s) above. Kambara discloses an information processing system that, when a shopper purchases a product displayed in a shop, enables the automation of the purchase of the product and reduces the time required for purchasing the product.
Nagai (US 11,481,789 B2, hereinafter Nagai) is prior art not applied in the rejection(s) above. Nagai discloses an information processing apparatus comprising a traffic line information generation unit configured to generate, based on information provided from an image capturing apparatus arranged in a store, traffic line information in the store of a customer who visits the store, an acquisition unit configured to acquire purchased product information associated with a product purchased by the customer in the store, a purchase information generation unit configured to generate purchase information based on the purchased product information by associating the traffic line information and the purchased product with the customer, and a display information generation unit configured to generate, based on the purchase information, display information for displaying information about one of a customer and a product in association with a time taken to select the purchased product and a selection order of the purchased product.
Sharma (US 11,615,430 B1, hereinafter Sharma) is prior art not applied in the rejection(s) above. Sharma discloses a method and system that evaluates the effectiveness of a display location within a store based on a behavioral response analysis of shoppers in the vicinity. The effectiveness of a display location is measured by tracking shoppers in-store, extracting and processing shopper attributes, and extracting metrics based on the processed attributes. The metrics of one location are compared to the metrics of another location to determine overall effectiveness.
OKAMOTO et al. (US 2020/0364752 A1, hereinafter Okamoto) is prior art not applied in the rejection(s) above. Okamoto discloses a storefront device comprising a display device specifying unit configured to specify, among a plurality of display devices, a display device for displaying sales management information indicating merchandise acquired in a storefront by a processing subject person detected in the storefront; and a display timing determination unit configured to determine a timing at which the sales management information is to be displayed on the specified display device.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA CHEN whose telephone number is (703)756-5394. The examiner can normally be reached M-Th 9:30 am - 4:30 pm ET; F 9:30 am - 2:30 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, STEPHEN R KOZIOL can be reached at (408)918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J. C./ Examiner, Art Unit 2665
/Stephen R Koziol/ Supervisory Patent Examiner, Art Unit 2665