DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following is a Final Office Action in response to communications received on 10/28/2025. Claims 1, 3, 4, and 7-12 are currently pending and have been examined. Claims 1 and 11 have been amended. Claims 2, 5-6, and 13-23 are cancelled.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Step 1: Claims 1, 3, 4, and 7-10 are directed to a system, and claims 11 and 12 are directed to a method. Thus, each claim, on its face, is directed to one of the statutory categories of 35 U.S.C. § 101. However, claims 1, 3, 4, and 7-12 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 2A Prong 1: The independent claims (1 and 11, taking claim 1 as representative) recite:
A system for determining consumers of a specific item being sold in a retail establishment, the system comprising:
a plurality of cameras for capturing images of customers in said retail establishment,
with each of said images of customers being provided with a time stamp, at least one of said plurality of cameras being a first camera enabled to capture images of a point of sale terminal, at least one of said plurality of cameras being a second camera enabled to capture images of customers entering said retail establishment, at least one of said plurality of cameras being a third camera enabled to capture images of a specific area of said retail establishment, said specific area being an area where said specific item is displayed for sale, and a fourth camera enabled to capture images of customers leaving said retail establishment;
a database storing: purchase data for said retail establishment detailing time of purchase and product identification numbers for specific products at said retail establishment; and
a processing unit configured to:
receive outputs of said plurality of cameras;
create a profile for each customer entering said retail establishment and storing said profile in a collection of profiles who are still in said retail establishment;
generate metadata for each customer after said customer has entered said retail establishment, wherein said profile for each customer is based on said metadata;
maintain said collection of profiles of customers who are still in said retail establishment by analyzing images of customers leaving said retail establishment and removing removed profiles from said collection of profiles,
said removed profiles being profiles in said collection of profiles that have been assigned to images of customers that match images of customers leaving said retail establishment;
determine locations of customers in said retail establishment by analyzing images of customers and assigning images of customers to profiles of customers still in said retail establishment based on metadata in said profiles;
based on said locations of said customers in said retail establishment,
determine which customers are located in a specific area, said specific area being either an area where said specific item is displayed or an area where marketing material relating to said specific item is displayed;
retrieve purchase data and determine, from said purchase data, times of purchase for said specific item;
determine, based on said times of purchase and based on time stamps for images of customers at said point of sale terminal, corresponding imaged customers and profiles assigned to said imaged customers to thereby determine profiles of customers who have purchased said specific item;
generate at least one record that comprises demographic data of customers who have purchased said specific item, said at least one record being based on said profiles of customers who have purchased said specific item;
determine which of said customers who were in said specific area have purchased said specific item;
generating a report based on said at least one record and on which of said customers who were in said specific area have purchased said specific item,
said report detailing demographics of customers who were exposed to said marketing material or to said specific item and who purchased said specific item and how many customers who were exposed to said marketing material or to said specific item purchased said specific item,
wherein at least one of said plurality of cameras is placed to capture images of customers entering said retail establishment;
at least one of said plurality of cameras is placed to capture images of at least one of said customers purchasing one of said products at said retail establishment;
reports for different products are generated and said reports are compared to determine which common customers are purchasing all of said different products;
said reports comprise reports generated for said specific product at different retail establishments, to thereby determine demographics of customers purchasing said specific product at different retail establishments;
and profiles of customers at said retail establishment are aggregated together to thereby determine a common profile for the majority of said retail establishment's customers.
These limitations, except for the additional elements identified in Prong 2 below, under their broadest reasonable interpretation, recite certain methods of organizing human activity: managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), as well as commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations). The claimed invention recites steps for gathering data about customers, purchases, demographics, and marketing effectiveness through the capture and analysis of images. These steps, under their broadest reasonable interpretation, specifically fall under sales activities. The Examiner notes that although the claim limitations are summarized, the subject matter eligibility analysis considers the entirety of the claim and all of the claim elements individually, as a whole, and in their ordered combination.
Step 2A Prong 2: This judicial exception is not integrated into a practical application. In particular, the claims recite the following additional elements:
A system
a plurality of cameras
at least one of said plurality of cameras being a first camera
at least one of said plurality of cameras being a second camera
at least one of said plurality of cameras being a third camera
and a fourth camera enabled
a database
a processing unit configured to:
wherein at least one of said plurality of cameras
at least one of said plurality of cameras is placed
The additional elements of a system; a plurality of cameras; a first camera; a second camera; a third camera; a fourth camera; a database; and a processing unit are recited at a high level of generality (i.e., as generic computer components performing the generic computer function of processing data), such that they amount to no more than mere instructions to apply the exception using a generic computer component. These limitations do not impose any meaningful limits on practicing the abstract idea, and therefore do not integrate the abstract idea into a practical application (MPEP 2106.05(f)).
Accordingly, these additional elements when considered individually or as a whole do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The independent claims are directed to an abstract idea.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A Prong 2, the additional elements in the claims amount to no more than mere instructions to apply the judicial exception using a generic computer component and generally link the judicial exception to a particular technological environment.
Even when considered as an ordered combination, the additional elements of claims 1 and 11 do not add anything that is not already present when they are considered individually. Therefore, under Step 2B, there are no meaningful limitations in claims 1 and 11 that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (see MPEP 2106.05).
The analysis above also applies to independent claim 11, which recites claim language parallel to that of claim 1.
As such, independent claims 1 and 11 are ineligible.
Dependent claims 3, 4, 7-10, and 12, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. § 101 because their additional limitations neither integrate the abstract idea of independent claims 1 and 11 into a practical application nor amount to significantly more.
Claim 3 recites wherein said outputs of said plurality of cameras are transmitted to said processing unit and said processing unit is physically remote from said retail establishment. The claim recites the additional element of the processing unit at a high level of generality and therefore does not integrate the judicial exception into a practical application.
Claim 4 recites wherein said demographic data is produced by said processing unit correlating said outputs of said plurality of cameras with said purchase data. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
Claim 7 recites wherein said outputs of said plurality of cameras are analyzed to determine how many customers purchased products from said retail establishment. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
Claim 8 recites wherein said outputs of said plurality of cameras are analyzed to enable tracking of at least one of said customers through said demographic data determined for said at least one of said customers. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
Claim 9 recites wherein said outputs of said plurality of cameras are correlated with said purchase data and said demographics of said at least one of said customers to determine which customer has purchased which products from said retail establishment. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
Claim 10 recites wherein said outputs of said plurality of cameras are correlated with said purchase data and contents of said database to determine an effectiveness of marketing material placement in said retail establishment. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
Claim 12 recites wherein said demographic data is used to label and track customers in said retail establishment. The limitation merely further limits the abstract idea and does not integrate the judicial exception into a practical application.
For these reasons, claims 1, 3, 4, and 7-12 are rejected under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3, 4, and 7-12 are rejected under 35 U.S.C. 103 as being unpatentable over Chan (US 20190095443) in view of "Video Analytics for Retail" (NPL), further in view of Linden (US 20160162913), and further in view of Sharma (US 11615430).
Regarding claims 1 and 11, Chan discloses:
A system for determining consumers of a specific item being sold in a retail establishment, the system comprising: (claim 1)
A method for determining consumers of a specific item being sold in a retail establishment, the method comprising: (claim 11)
a plurality of cameras for capturing images of customers in said retail establishment, [0047] In some embodiments of the present invention, user identification program 106 performs sensor based methods (e.g., cameras, depth sensors, temperature sensors, among others) for identifying the users and detecting interactions between the user and the site location using one or more site sensors 116.
at least one of said plurality of cameras being a second camera enabled to capture images of customers entering said retail establishment, [0059] User A walks into a department store where site sensors 116 begin to capture his biometric data while User A is inside the store. Further, [0127] discloses a plurality of site sensors.
at least one of said plurality of cameras being a third camera enabled to capture images of a specific area of said retail establishment, said specific area being an area where said specific item is displayed for sale, [0047] In some embodiments of the present invention, user identification program 106 performs sensor based methods (e.g., cameras, depth sensors, temperature sensors, among others) for identifying the users and detecting interactions between the user and the site location using one or more site sensors 116. In some embodiments, user behavior is determined by factors such as the time spent looking at a specific product, product characteristics (e.g., clearance or sales products) or product categories (e.g., clothes for 3-5 years old babies), time spent at a specific aisle, identification of products for which the user has performed a price check, path taken in the store, among others.
and a fourth camera enabled to capture images of customers leaving said retail establishment; [0085] Processing continues at operation 960, where user identification program 106 determines whether the user is entering or exiting the premises
a database storing: [0146] Processing continues at operation 2180, where user identification program 106 stores the transaction data for user ID. In some embodiments of the present invention, once the transaction data is associated to the social network user ID, user identification program 106 may store the transaction in database 108 as part of the customer characteristics and the customer behaviors used to identify the user and/or provide analytics for the user.
a processing unit configured to: (processing environment in Figure 1)
receive outputs of said plurality of cameras; [0127] In some embodiments of the present invention, one or more site sensors 116 at retail store 1402 capture a set on input images and determine user attributes and user behaviors for one or more users. In some embodiments of the present invention, receiving the user attributes (e.g., identity patterns) and the user behaviors for a user may be based on one or more embodiments described in this disclosure.
create a profile for each customer entering said retail establishment and storing said profile in a collection of profiles who are still in said retail establishment; [0083] User identification program 106 is depicted and described in further detail with respect to FIG. 9. Referring to flowchart 900, user identification program 106 creates and manages temporary user profiles for users in a specific location (e.g., a store or a restaurant).
generate metadata for each customer after said customer has entered said retail establishment, wherein said profile for each customer is based on said metadata; [0089] Processing continues at operation 980, where user identification program 106 creates a temporary profile for the user based on the identity pattern and the user behavior. Continuing our exemplary embodiment, user identification program 106 creates a temporary profile including the images and user behavior for User A.
maintain said collection of profiles of customers who are still in said retail establishment by analyzing images of customers leaving said retail establishment [0083] User identification program 106 is depicted and described in further detail with respect to FIG. 9. Referring to flowchart 900, user identification program 106 creates and manages temporary user profiles for users in a specific location (e.g., a store or a restaurant). [0084] Processing begins at operation 955, where user identification program 106 receives a set of input images. In some embodiments of the present invention, user identification program 106 captures a set of input images for a user from site sensors 116. In an exemplary embodiment, User A enters a department store and user identification program 106 receives a set of input images (shown in FIG. 12) corresponding to User A. [0085] Processing continues at operation 960, where user identification program 106 determines whether the user is entering or exiting the premises.
and removing removed profiles from said collection of profiles, said removed profiles being profiles in said collection of profiles that have been assigned to images of customers that match images of customers leaving said retail establishment; [0090] If the user is exiting the premises (operation 960, “exit” branch”), processing proceeds at operation 985, where user identification program 106 stores the temporary user profile in database 108 and deletes the local temporary profile. In our exemplary embodiment, User A exits the store and user identification program 106 stores the temporary user profile for User A in database 108 and deletes the local temporary file. [0095] Processing continues at operation 1070, where user identification program 106 matches the set of input images and the user behavior to a user profile. In our exemplary embodiment, user identification program 106 matches each profile image 1202, 1204, 1206, 1208, and 1210 in the set of input images with the profile images 1212, 1214, and 1216 associated with a user profile for Ben.
determine locations of customers in said retail establishment by analyzing images of customers [0094] Processing proceeds at operation 1065, where user identification program 106 determines a user behavior based on the set of input images. Examples of user behavior include: (a) interaction with objects; and (b) time at specific locations. Continuing our exemplary embodiment, user identification program 106 tracks the user behavior of User A throughout the store in order to identify his user profile in database 108. User A spends twenty minutes in the electronics department looking for flat-screen television sets.
and assigning images of customers to profiles of customers still in said retail establishment based on metadata in said profiles; [0089] Processing continues at operation 980, where user identification program 106 creates a temporary profile for the user based on the identity pattern and the user behavior. Continuing our exemplary embodiment, user identification program 106 creates a temporary profile including the images and user behavior for User A.
based on said locations of said customers in said retail establishment, determine which customers are located in a specific area, said specific area being either an area where said specific item is displayed or an area where marketing material relating to said specific item is displayed; [0094] Processing proceeds at operation 1065, where user identification program 106 determines a user behavior based on the set of input images. Examples of user behavior include: (a) interaction with objects; and (b) time at specific locations. Continuing our exemplary embodiment, user identification program 106 tracks the user behavior of User A throughout the store in order to identify his user profile in database 108. User A spends twenty minutes in the electronics department looking for flat-screen television sets.
wherein at least one of said plurality of cameras is placed to capture images of customers entering said retail establishment; and [0059] User A walks into a department store where site sensors 116 begin to capture his biometric data while User A is inside the store. Further, [0127] discloses a plurality of site sensors.
While Chan discloses the creation of user profiles from a plurality of sensors, including cameras, the reference does not expressly disclose:
with each of said images of customers being provided with a time stamp,
at least one of said plurality of cameras being a first camera enabled to capture images of a point of sale terminal,
purchase data for said retail establishment detailing time of purchase and product identification numbers for specific products at said retail establishment; and
retrieve purchase data and determine, from said purchase data, times of purchase for said specific item;
determine, based on said times of purchase and based on time stamps for images of customers at said point of sale terminal, corresponding imaged customers and profiles assigned to said imaged customers to thereby determine profiles of customers who have purchased said specific item;
generate at least one record that comprises demographic data of customers who have purchased said specific item, said at least one record being based on said profiles of customers who have purchased said specific item;
determine which of said customers who were in said specific area have purchased said specific item;
generating a report based on said at least one record and on which of said customers who were in said specific area have purchased said specific item,
said report detailing demographics of customers who were exposed to said marketing material or to said specific item and who purchased said specific item and how many customers who were exposed to said marketing material or to said specific item purchased said specific item,
at least one of said plurality of cameras is placed to capture images of at least one of said customers purchasing one of said products at said retail establishment
reports for different products are generated and said reports are compared to determine which common customers are purchasing all of said different products;
said reports comprise reports generated for said specific product at different retail establishments, to thereby determine demographics of customers purchasing said specific product at different retail establishments; and
profiles of customers at said retail establishment are aggregated together to thereby determine a common profile for the majority of said retail establishment's customers.
However Video Analytics for Retail teaches:
with each of said images of customers being provided with a time stamp, (See Figure 2 and Figure 3 with timestamps)
at least one of said plurality of cameras being a first camera enabled to capture images of a point of sale terminal, [page 426] “Register video” and “Customer video” columns contain links to video from two cameras that may be available — one of the register itself for observing cash drawer and cashier activity, and one (potentially the same) of the customer which will also show items being bought.
purchase data for said retail establishment detailing time of purchase and product identification numbers for specific products at said retail establishment; and (see table in Figure 4 showing Time and SKU for purchases)
retrieve purchase data and determine, from said purchase data, times of purchase for said specific item; (see table in Figure 4 showing Time and SKU for purchases)
determine, based on said times of purchase and based on time stamps for images of customers at said point of sale terminal, corresponding imaged customers and profiles assigned to said imaged customers to thereby determine profiles of customers who have purchased said specific item; Figure 4 stores the video of the customer, the time, and the item number of the item purchased
at least one of said plurality of cameras is placed to capture images of at least one of said customers purchasing one of said products at said retail establishment. [page 426] “Register video” and “Customer video” columns contain links to video from two cameras that may be available — one of the register itself for observing cash drawer and cashier activity, and one (potentially the same) of the customer which will also show items being bought.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis of Chan to include the limitations above as taught in Video Analytics for Retail, in order to plan merchandising activities based on similar analytics, such as choosing the location of a display based on customer paths, as well as measuring the effectiveness of a display based on customer counts coupled with sales figures (page 2).
While Chan teaches the creation of user profiles based on image analysis at an establishment, and Video Analytics for Retail teaches the correlation of purchase times with register video, each reference disclosing a plurality of cameras for customers visiting an establishment, the combination does not explicitly disclose:
generate at least one record that comprises demographic data of customers who have purchased said specific item, said at least one record being based on said profiles of customers who have purchased said specific item;
determine which of said customers who were in said specific area have purchased said specific item;
generating a report based on said at least one record and on which of said customers who were in said specific area have purchased said specific item,
said report detailing demographics of customers who were exposed to said marketing material or to said specific item and who purchased said specific item and how many customers who were exposed to said marketing material or to said specific item purchased said specific item,
reports for different products are generated and said reports are compared to determine which common customers are purchasing all of said different products;
said reports comprise reports generated for said specific product at different retail establishments, to thereby determine demographics of customers purchasing said specific product at different retail establishments; and
profiles of customers at said retail establishment are aggregated together to thereby determine a common profile for the majority of said retail establishment's customers.
However Linden teaches:
generate at least one record that comprises demographic data of customers who have purchased said specific item, said at least one record being based on said profiles of customers who have purchased said specific item; (Shown in Figure 5 Merchant insights for customers purchasing books by John Smith;[0082] The merchant insights manager 206 can also include information corresponding to the identified users in one or more merchant insights. For example, the merchant insights manager 206 may provide the merchant, in merchant insights, with the number of users who visited the merchant's retail location, or a specific area, during a specified time period (e.g., 55 users visited the downtown location last month, 4,000 users visited locations located in Nevada last week, 200 users have lingered over 15 minutes in the hardware department in the last two weeks, etc.), common attributes shared between the identified users (e.g., 33% of identified users enjoy outdoor activities, 45% of identified users buy organic food, the majority of identified users belong to a social networking group or professional group, etc.), demographics of the identified users (80% of identified users are female, 28% of identified users are over the age of 65, 20 users live within 2 miles of the downtown location, etc.), time of visits (e.g., on weekdays: 15% visit before 11 am, 25% visit between 11 am and 2 pm, 20% visit between 2 pm and 5 pm, 40% visit from 5 pm to closing), etc.;[0084] In additionally or alternative embodiments, the merchant insights manager 206 may generate merchant insights upon receiving a request from the merchant. For example, a merchant may request merchant insights for a product category and the merchant insights manager 206 may generate merchant insights for the product category, or one or more products or product brands, within the product category.
determine which of said customers who were in said specific area have purchased said specific item; generating a report based on said at least one record and on which of said customers who were in said specific area have purchased said specific item, (Figure 5 shows that the majority of users who purchase John Smith books frequently shop the Historical Fiction section)
reports for different products are generated and said reports are compared to determine which common customers are purchasing all of said different products; [0081] In some example embodiments, the merchant insights manager 206 can include, in merchant insights, statistical information about a product, product brand, and/or product category. For example, the merchant insights manager 206 can include statistical information corresponding to a product brand recommended to the merchant. To illustrate, the merchant insights manager 206 may provide total sales for the product brand over a time period (e.g., all time, past year, past month), where identified users are otherwise purchasing the product brand (e.g., online websites, such as Amazon.com, Wal-Mart.com, eBay, etc.), price points for the product brand (e.g., average price, most common prices, price ranges, mean prices, median prices, shipping costs, tax, etc.), how frequently an identified user purchases the product brand (e.g., weekly, monthly, every two months, four times a year, etc.), whether a product brand is seasonal (e.g., when peak purchase times are for the product brand, such as during summer months), and other statistics associated with the recommended product brand. FIG. 5 below illustrates and discusses various non-limiting examples of information that merchant insights manager 206 can present to a merchant as merchant insights.
said reports comprise reports generated for said specific product at different retail establishments, to thereby determine demographics of customers purchasing said specific product at different retail establishments; and [0105] The merchant insights can also include descriptive recommendations. For example, the merchant insights may recommend that Evening Star's Book Boutique offer books by author John Smith, which are published by Solitude Publishers. Further, the merchant insights may provide support for product, product brand, and/or product category recommendations. For example, as shown in GUI 500, the merchant insights may indicate that 75% of identified users who visit the downtown location purchase John Smith books online.
profiles of customers at said retail establishment are aggregated together to thereby determine a common profile for the majority of said retail establishment's customers. [0082] The merchant insights manager 206 can also include information corresponding to the identified users in one or more merchant insights. For example, the merchant insights manager 206 may provide the merchant, in merchant insights, with the number of users who visited the merchant's retail location, or a specific area, during a specified time period (e.g., 55 users visited the downtown location last month, 4,000 users visited locations located in Nevada last week, 200 users have lingered over 15 minutes in the hardware department in the last two weeks, etc.), common attributes shared between the identified users (e.g., 33% of identified users enjoy outdoor activities, 45% of identified users buy organic food, the majority of identified users belong to a social networking group or professional group, etc.), demographics of the identified users (80% of identified users are female, 28% of identified users are over the age of 65, 20 users live within 2 miles of the downtown location, etc.), time of visits (e.g., on weekdays: 15% visit before 11 am, 25% visit between 11 am and 2 pm, 20% visit between 2 pm and 5 pm, 40% visit from 5 pm to closing), etc.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis of Chan in view of Video Analytics for Retail, with the limitations above, as taught in Linden, in order to allow the merchant to gain insights that provide the ability to capitalize on a business opportunity (paragraph [0006]).
While Chan teaches the creation of user profiles based on image analysis at an establishment, Video Analytics for Retail teaches the correlation of purchase times with register video tracking (each disclosing a plurality of cameras for customers visiting an establishment), and Linden teaches the generation of merchant analytics for a physical location, the combination does not explicitly disclose:
said report detailing demographics of customers who were exposed to said marketing material or to said specific item and who purchased said specific item and how many customers who were exposed to said marketing material or to said specific item purchased said specific item
However Sharma teaches:
said report detailing demographics of customers who were exposed to said marketing material or to said specific item and who purchased said specific item and how many customers who were exposed to said marketing material or to said specific item purchased said specific item (see Figures 11 and 12, detailing the traffic count, demographics, exposure, and ultimate conversion rates for people interacting with a display for a product in location N)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis of Chan in view of Video Analytics for Retail in further view of Linden to include said report detailing demographics of customers who were exposed to said marketing material or to said specific item and who purchased said specific item and how many customers who were exposed to said marketing material or to said specific item purchased said specific item, as taught in Sharma, in order to determine the effectiveness of product displays (Col. 3 lines 20-25).
Regarding claim 3, Chan in view of Video Analytics in view of Linden in further view of Sharma teaches the limitations set forth above. While Chan teaches the creation of user profiles based on image analysis at an establishment, Video Analytics for Retail teaches the correlation of purchase times with register video tracking (each disclosing a plurality of cameras for customers visiting an establishment), and Linden teaches the generation of merchant analytics for a physical location, the combination does not explicitly disclose:
wherein said outputs of said plurality of cameras are transmitted to said processing unit and said processing unit is physically remote from said retail establishment.
However Sharma teaches:
wherein said outputs of said plurality of cameras are transmitted to said processing unit and said processing unit is physically remote from said retail establishment. [Col. 17 lines 20-25] The means for control and processing, as well as the means for video/signal interface, can be placed locally or remotely, as long as the connection to the means for capturing images and the means for detecting mobile device signals can be established.
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis of Chan in view of Video Analytics for Retail in further view of Linden to include wherein said outputs of said plurality of cameras are transmitted to said processing unit and said processing unit is physically remote from said retail establishment, as taught in Sharma, in order to determine the effectiveness of product displays (Col. 3 lines 20-25).
Regarding claim 4, Chan in view of Video Analytics in view of Linden in further view of Sharma teaches the limitations set forth above.
Chan further discloses: wherein said demographic data is produced by said processing unit correlating said outputs of said plurality of cameras ([0047] In some embodiments of the present invention, user identification program 106 performs sensor based methods (e.g., cameras, depth sensors, temperature sensors, among others) for identifying the users and detecting interactions between the user and the site location using one or more site sensors 116.) with said purchase data ([0146] Processing continues at operation 2180, where user identification program 106 stores the transaction data for user ID. In some embodiments of the present invention, once the transaction data is associated to the social network user ID, user identification program 106 may store the transaction in database 108 as part of the customer characteristics and the customer behaviors used to identify the user and/or provide analytics for the user.).
Regarding claim 7, Chan in view of Video Analytics in view of Linden in further view of Sharma teaches the limitations set forth above.
Chan further discloses wherein said outputs of said plurality of cameras ([0047] In some embodiments of the present invention, user identification program 106 performs sensor based methods (e.g., cameras, depth sensors, temperature sensors, among others) for identifying the users and detecting interactions between the user and the site location using one or more site sensors 116.)
However Chan does not disclose:
are analyzed to determine how many customers purchased products from said retail establishment.
However Linden discloses:
are analyzed to determine how many customers purchased products from said retail establishment. (see purchase measures 498: number of buyers, total sales, average basket size)
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis of Chan in view of Video Analytics for Retail to include wherein said outputs are analyzed to determine how many customers purchased products from said retail establishment, as taught in Linden, in order to allow the merchant to gain insights that provide the ability to capitalize on a business opportunity (paragraph [0006]).
Regarding claim 8, Chan in view of Video Analytics in view of Linden in further view of Sharma teaches the limitations set forth above.
Chan further discloses:
wherein said outputs of said plurality of cameras ([0047] In some embodiments of the present invention, user identification program 106 performs sensor based methods (e.g., cameras, depth sensors, temperature sensors, among others) for identifying the users and detecting interactions between the user and the site location using one or more site sensors 116.) are analyzed to enable tracking of at least one of said customers through said demographic data determined for said at least one of said customers ([0049] In some embodiments of the present invention, user identification program 106 associates the user behavior captured at different times. The association can be done using a path tracking method, where the user is tracked from location A to location B. This path tracking method will ensure it is the same user at location A and location B, and therefore the behavior belongs to the same user. See also [0048].).
Regarding claim 9, Chan in view of Video Analytics in view of Linden in further view of Sharma teaches the limitations set forth above.
Chan further discloses:
wherein said outputs of said plurality of cameras ([0047] In some embodiments of the present invention, user identification program 106 performs sensor based methods (e.g., cameras, depth sensors, temperature sensors, among others) for identifying the users and detecting interactions between the user and the site location using one or more site sensors 116.) are correlated with said purchase data and said demographics of said at least one of said customers to determine which customer has purchased which products from said retail establishment ([0146] Processing continues at operation 2180, where user identification program 106 stores the transaction data for user ID. In some embodiments of the present invention, once the transaction data is associated to the social network user ID, user identification program 106 may store the transaction in database 108 as part of the customer characteristics and the customer behaviors used to identify the user and/or provide analytics for the user.).
Regarding claim 10, Chan in view of Video Analytics in view of Linden in further view of Sharma teaches the limitations set forth above. While Chan teaches the creation of user profiles based on image analysis at an establishment, Video Analytics for Retail teaches the correlation of purchase times with register video tracking (each disclosing a plurality of cameras for customers visiting an establishment), and Linden teaches the generation of merchant analytics for a physical location, the combination does not explicitly disclose:
wherein said outputs of said plurality of cameras are correlated with said purchase data and contents of said database to determine an effectiveness of marketing material placement in said retail establishment.
However Sharma teaches:
wherein said outputs of said plurality of cameras (see cameras 101, 111, 112, 115, 116 in Figure 1) are correlated with said purchase data and contents of said database to determine an effectiveness of marketing material placement in said retail establishment ([Col. 8 lines 53-67] Based on the information representation of the shopping trips, the location effectiveness estimation layer 750 generates the pre-defined location effectiveness measures 770 by automatic Big Data processing 751 or semi-automatic processing 752 that entails some amount of manual processing by human operators. The pre-defined location effectiveness measures 770 may include various in-store traffic information and the conversion rates among the shopping stages mentioned earlier. Such pre-defined location effectiveness measures 770 may be analyzed in different dimensions including product, brand, demographic segment, and time. For example, the system may generate estimation on the location effectiveness of a particular product for a particular demographic segment during a certain season or time period. The dimensions may be combined together to define a narrower space to be analyzed by the Big Data processing 751 or semi-automatic processing 752 module to extract more targeted trends or unique characteristics.).
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the image analysis of Chan in view of Video Analytics for Retail in further view of Linden to include wherein said outputs of said plurality of cameras are correlated with said purchase data and contents of said database to determine an effectiveness of marketing material placement in said retail establishment, as taught in Sharma, in order to determine the effectiveness of product displays (Col. 3 lines 20-25).
Regarding claim 12, Chan in view of Video Analytics in view of Linden in further view of Sharma teaches the limitations set forth above.
Chan further discloses:
wherein said demographic data is used to label and track customers in said retail establishment ([0049] In some embodiments of the present invention, user identification program 106 associates the user behavior captured at different times. The association can be done using a path tracking method, where the user is tracked from location A to location B. This path tracking method will ensure it is the same user at location A and location B, and therefore the behavior belongs to the same user.).
Response to Arguments
Applicant's arguments filed 10/28/2025 have been fully considered but they are not persuasive for the reasons set forth below.
With respect to the prior art rejection, the rejection has been updated above in light of the claim amendments. Moore is no longer relied upon, and Sharma is no longer the primary reference; Sharma is now relied upon only to teach the exposure calculation. The rejection now relies on Chan as the primary reference and on newly cited Linden to teach the added claim limitations. Linden collects customer information using sensor data to determine merchant insights for a merchant.
With respect to the rejection under 35 USC 101, the examiner asserts that the creation of the common profile at most improves the abstract idea by allowing a retailer improved insights into its customer and product information. Aggregating the information into a common profile merely results in a consequential reduction in what the processor needs to process and is not a technical improvement. A mere reduction in the amount of data to be processed would logically require less processing power; the claimed invention itself does not improve the manner in which the computer processes data.
For at least these reasons the claims remain rejected under 35 USC 101 and 103.
Relevant Art Not Cited
LaCroix US 20140362223 discloses a plurality of video cameras capturing data about a customer interacting with a display and providing an advertising message associated with the display and/or interaction with the product.
PYDYNOWSKI US 20180075468 discloses [0022] Disclosed tool embodiments may operate upon aggregated data relating to customers, which may be categorized and filtered by age, gender, transaction frequency, location frequency, or consumer engagement level (e.g., level of spending, number of purchased items per visit, etc.), among other demographics. Disclosed tool embodiments may further operate upon aggregated data relating to merchants, which may be categorized and filtered by merchant name, industry, industry sub-category, new or existing locations, etc. Disclosed tool embodiments may further operate upon aggregated data relating to individual transactions, which may be categorized and filtered by time of day, day of week, and purchase channel, among other transaction attributes.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTORIA E. FRUNZI whose telephone number is (571)270-1031. The examiner can normally be reached Monday- Friday 7-4 (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regardi