DETAILED ACTION
This is in reference to the communication received 26 November 2025. Cancellation of claims 3, 8, 12 and 17 is acknowledged. Claims 1, 2, 4-7, 9-11, 13-16 and 18-20 are pending and have been examined. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 2, 4-7, 9-11, 13-16 and 18-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Independent claim 1, representative of claims 10 and 19, is in part directed toward a statutory category of invention; however, the claim appears to be directed toward a judicial exception, namely an abstract idea. Claim 1 recites an invention directed to receiving historical data; processing the received data to group data associated with one of the customers, identified as a grouped data record of the customer; when a customer is identified as present at a location, referencing profile data of the customer to determine the customer’s attributes; predicting a likely path the customer may travel based on customer behavior; and presenting to the customer, at the identified location, a targeted advertisement based upon the customer attributes. Pursuant to MPEP 2106.04, this is aptly categorized as a method of organizing human activity (i.e., advertising). Therefore, under Step 2A, Prong One, the claims recite a judicial exception.
Next, the aforementioned claims recite additional functional elements that are associated with the judicial exception, including: creating, by the processor using the received data, a set of customer embeddings for the set of customers, each customer embedding including a vector representation of an image of a respective customer from the set of customers, and linking customers with the customer embeddings. Examiner understands these limitations to be insignificant extra-solution activity. See Accenture Global Servs., GmbH v. Guidewire Software, Inc., 728 F.3d 1336, 108 U.S.P.Q.2d 1173 (Fed. Cir. 2013), citing Diamond v. Diehr, 450 U.S. 175, 191-192 (1981) ("[I]nsignificant post-solution activity will not transform an unpatentable principle into a patentable process."). The independent claim further recites the functional element of predicting at least one of a demand value or a likely path of a customer from the set of customers toward a location based on at least one of the set of customer behaviors or the set of customer attributes (the results of the prediction are not utilized in the invention as currently claimed). Examiner understands this limitation likewise to be insignificant extra-solution activity. See Accenture, 728 F.3d 1336; Diehr, 450 U.S. at 191-192.
The aforementioned claims also recite additional technical elements, including: “one or more processors” for executing computer-executable instructions (software); a “data repository” for storing data; “a server having a processor” for executing computer-executable instructions (software); and using a machine-learning model to determine customer behavior and the likely travel path of the customer. These limitations are recited at a high level of generality and appear to be nothing more than generic computer components. Claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2358, 110 USPQ2d 1976, 1983 (2014); see also 134 S. Ct. at 2359, 110 USPQ2d at 1984.
Representative claims 10 and 19 do recite statutory categories (a machine and an article of manufacture, for example); the same analysis as above applies to these claims since the method steps are the same. However, the judicial exception is not integrated into a practical application. These claims add the generic computer components (additional elements) of a server comprising processor(s) and computer-executable instructions that cause the server to perform the method addressed above (claim 10), and a system comprising one or more hardware processors and a memory to perform the method addressed above (claim 19).
The processor, memory, and non-transitory machine-readable medium are recited at a high level of generality such that they amount to no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of the processor, memory, and non-transitory machine-readable medium amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are not patent eligible.
When taken as an ordered combination, nothing is added that is not already present when the elements are taken individually. When viewed as a whole, the marketing activities amount to instructions applied using generic computer components.
Dependent claims 2, 4 – 7, 9, 11, 13 – 16, 18 and 20 depend from the aforementioned independent claims and include all the limitations contained therein. These claims do not recite any additional technical elements, and simply disclose additional limitations that further limit the abstract idea, such as transmitting to a customer a recommendation corresponding to the predicted demand value or likely path, details regarding descriptions of the various data, and what models and data elements are used to generate the customer attributes, behaviors, and predictions. Thus, the dependent claims merely provide additional non-structural (and predominantly non-functional) details that fail to meaningfully limit the claims or the abstract idea(s).
Therefore, claims 1, 2, 4-7, 9-11, 13-16 and 18-20 are not drawn to eligible subject matter, as they are directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2, 4-7, 9-11, 13-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over The Hated One YouTube video “Facial Recognition Advertising Is Actually a Thing,” hereinafter referred to as The-Hated-One, in view of Pillai et al., US Patent 9,940,583, hereinafter referred to as Pillai, and Taskiran et al., published article “Face Recognition – Past Present and Future,” hereinafter referred to as Taskiran.
Regarding claim 1 and representative claims 10 and 19, The-Hated-One teaches that facial recognition advertising as depicted in Minority Report is coming to your local stores sooner than you thought, and that the technology is becoming so mainstream it will become a part of the new social contract [The-Hated-One, page 1], comprising:
one or more processors (The-Hated-One, Several US retailers, including Walgreens and Kroger, are piloting facial recognition advertising in physical stores. The goal is to use all available information from the cameras to target customers with relevant ads in a similar way online advertising targets Internet users) [The-Hated-One, page 1]; and
one or more computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations (The-Hated-One, Several US retailers, including Walgreens and Kroger, are piloting facial recognition advertising in physical stores. The goal is to use all available information from the cameras to target customers with relevant ads in a similar way online advertising targets Internet users) [The-Hated-One, page 1], comprising:
a data repository (The-Hated-One, Facebook (e.g., now known as Meta) has one of the largest databases of pictures of people’s faces, and the social network has been implementing facial recognition since 2011) [The-Hated-One, page 25 – 26]; and
a server having a processor (e.g., www.FaceBook.com website) configured to:
receiving, by a processor, data associated with a set of customers (The-Hated-One, FaceDeals, an Atlanta-based company, is testing a program that helps stores, bars and restaurants identify people’s faces through their Facebook accounts. When FaceDeals learns a user’s face, its camera will recognize them in an instant and send them customized coupons based on their social media activity) [The-Hated-One, page 27 – 28];
The-Hated-One does not explicitly teach customer profile data of customers. However, Pillai teaches a system and method for transmitting content to a kiosk after determining a future location of a user, to engage the user and provide high-speed data transfer of content to a user device associated with the user. The kiosk may be located in frequently visited locations such as airports, bus stops, train stations, shopping malls, libraries, office buildings, and/or other private or public spaces that are accessible by many users [Pillai, col. 1, line 65 – col. 2, line 6]. Pillai further recites: In some embodiments, the UI module 306 may personalize some of the content displayed on the monitor(s) 108 based on information received from the user or associated with the user, such as attributes of the user, data received from the user device 104 associated with the user such as one or more user profiles, environmental data (e.g., time of day, destination of upcoming departing vehicles, etc.) [Pillai, col. 6, lines 53 – 60].
Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to modify The-Hated-One by adopting the teachings of Pillai of displaying relevant advertising to a user, after determining the user’s future location, to engage the user to make a purchase.
The-Hated-One in view of Pillai teaches system and method further comprising:
receiving, by a processor, customer profile [Pillai, col. 6, lines 53 - 60] data associated with a set of customers [The-Hated-One, page 27 – 28];
The-Hated-One in view of Pillai does not explicitly teach a vector representation of an image in embeddings of customer data. However, Taskiran teaches that in the many-to-one normalization pre-processing approach, the goal is to generate the canonical view of the face image by using face images obtained from different angles in uncontrolled environments. Stacked Autoencoder (SAE) [376], CNN [417] and GAN [328] structures have been used to obtain a frontal face image using patches from multiple images with different angles [Taskiran, page 8].
Therefore, at the time of filing, it would have been obvious to one of ordinary skill in the art to modify The-Hated-One in view of Pillai by adopting the teachings of Taskiran to estimate facial landmarks on the face (corners of the eyes, eyebrows, and the mouth, the tip of the nose, etc.) and use them for face alignment.
The-Hated-One in view of Pillai and Taskiran teaches system and method further comprising:
creating, by the processor using the received customer profile data as input into an encoding machine learning model, a set of customer embeddings for the set of customers [Taskiran], each customer embedding including a vector representation of an image of a respective customer from the set of customers [Taskiran] and location and item preferences for the respective customer (Pillai, The optical detection zone 804 may be an area where the image sensor 316 is capable of detecting a prospective user and/or information about the user. The information about the user may include the user's movement direction, a direction the user faces, eye contact with the kiosk 102, and/or other relevant information to detect possible engagement with the kiosk 102.) [Pillai, col. 16, lines 5 – 11];
linking, by the processor, the customer profile data with the set of customer embeddings [Pillai, col. 16, lines 5 – 11];
storing, by the processor, a database comprising the linked customer profile data and customer embeddings in corresponding profiles for each of the set of customers (Pillai, The kiosk 102 may associate an image of the user via the image sensor 316 with a detected device that is detected via the LPWC component 320 in the LPWC zone 806. In various embodiments, this association may be stored in association with the identifier 812 assigned to the user.) [Pillai, col. 16, lines 19 – 24];
receiving, by the processor, sensor fusion data of a customer of the set of customers from a camera capturing a spatial region within a building, the sensor fusion data comprising one or more images of the customer and one or more of video data, audio data, or movement data of the customer at an original location within the spatial region within the building (Pillai, The optical detection zone 804 may be an area where the image sensor 316 is capable of detecting a prospective user and/or information about the user. The information about the user may include the user's movement direction, a direction the user faces, eye contact with the kiosk 102, and/or other relevant information to detect possible engagement with the kiosk 102.) [Pillai, col. 16, lines 5 – 11];
determining, by the processor, at least one of the one or more images match a customer embedding in the set of customer embeddings for the customer stored in the database (The-Hated-One, Let’s say you are standing in front of a smart display and you see an ad. If the AI behind it thinks your facial expressions are mainly positive about the ad and can link your face with your Facebook account, you could be delivered a discount voucher straight to your Facebook inbox.) [The-Hated-One, page 39];
linking, by the processor, the sensor fusion data of the customer with the customer embedding generated for the customer [Pillai, col. 16, lines 5 – 11];
executing, by the processor, a machine learning model using the linked sensor fusion data of the customer to generate a set of real-time customer behaviors or a set of real-time customer attributes (Pillai, The kiosk may employ machine learning to update the process to more effectively engage users based on attributes of users, success/failure instances, environmental factors (e.g., time, location, etc.), and/or other factors.) [Pillai, col. 2, lines 29 – 33]; and
predicting, by the processor, a likely path of movement of a customer from the original location within the spatial region within the building toward a destination location within the spatial region within the building that corresponds to a particular item of item preferences of the customer, using the customer embedding and at least one of the set of real-time customer behaviors or the set of real-time customer attributes as input into a path prediction machine learning model (The-Hated-One, here you see the main protagonist walking through a shopping mall while the display ads are calling him out by his name, and relevant ads are displayed on the displays along the path the protagonist is walking) [The-Hated-One, page 5 – 7; also see Pillai, col. 10, lines 20 – 25 and col. 17, lines 62 – 67];
selecting, by the processor, a content item that corresponds to the particular item and a display device on the predicted path (Pillai, At 910, the cloud prediction device 120 may transmit a message associated with the predicted future visit by the user to the identified kiosk and/or the content provider 116 to enable custom content for the user at the identified kiosk. For example, the content provider 116 may cause transmission of content predicted to be enjoyable for the user to the identified kiosk prior to the predicted arrival by the user at the kiosk.) [Pillai, col. 18, lines 13 – 20]; and
streaming, by the processor, the selected content item to the selected display device on the predicted path [Pillai, col. 18, lines 13 – 20].
Regarding claim 2 and representative claims 11 and 20, as combined and under the same rationale as above, The-Hated-One in view of Pillai and Taskiran teaches system and method further comprising: transmitting, by the processor to an electronic device of at least one customer, a recommendation corresponding to the likely path (Pillai, At 910, the cloud prediction device 120 may transmit a message associated with the predicted future visit by the user to the identified kiosk and/or the content provider 116 to enable custom content for the user at the identified kiosk. For example, the content provider 116 may cause transmission of content predicted to be enjoyable for the user to the identified kiosk prior to the predicted arrival by the user at the kiosk.) [Pillai, col. 18, lines 13 – 20].
Regarding claim 4 and representative claim 13, as combined and under the same rationale as above, The-Hated-One in view of Pillai and Taskiran teaches system and method, wherein the customer profile data comprises at least one of loyalty data, demographic data, or transaction data associated with at least a portion of the set of customers (Pillai, The optical detection zone 804 may be an area where the image sensor 316 is capable of detecting a prospective user and/or information about the user. The information about the user may include the user's movement direction, a direction the user faces, eye contact with the kiosk 102, and/or other relevant information to detect possible engagement with the kiosk 102.) [Pillai, col. 16, lines 5 – 11].
Regarding claim 5 and representative claim 14, as combined and under the same rationale as above, The-Hated-One in view of Pillai and Taskiran teaches system and method, wherein the processor extracts the set of real-time customer attributes using a customer attribute mapping model that detects data associated with an object associated with at least one customer based on an image of the customer (The-Hated-One, A social media platform can track your face as you are typing that comment or text with your friends, and suggest you mood-specific ads. When an app like this throws an ad at your eyeballs, your facial expression can tell the advertiser a lot more about the success of their ad than traditional engagement statistics. Emotional analytics is a way Realeyes, a startup based in London, uses webcams to measure subconscious responses to video content.) [The-Hated-One, page 38].
Regarding claim 6 and representative claim 15, as combined and under the same rationale as above, The-Hated-One in view of Pillai and Taskiran teaches system and method, wherein the processor extracts the set of real-time customer behaviors using a customer behavior mapping model that detects data associated with at least one of an emotion, a dwell time, gazing, a pace, an instance of moving with a group of at least one customer of the set of customers (The-Hated-One, Apps on your phone can use your high definition selfie camera to track your emotional expressions of your face as you listen to music, and suggest you playlists that correspond to your mood. A social media platform can track your face as you are typing that comment or text with your friends, and suggest you mood-specific ads. When an app like this throws an ad at your eyeballs, your facial expression can tell the advertiser a lot more about the success of their ad than traditional engagement statistics. Emotional analytics is a way Realeyes, a startup based in London, uses webcams to measure subconscious responses to video content.) [The-Hated-One, page 38].
Regarding claim 7 and representative claim 16, as combined and under the same rationale as above, The-Hated-One in view of Pillai and Taskiran teaches system and method, wherein the likely path corresponds to an affinity towards an item (Pillai, The UI module 306 may predict that a user is walking in a direction of a gate with a specific departure ( e.g., an international flight to Italy, etc.), and may then provide a presentation of selections of content related to Italy, for example.) [Pillai, col. 14, lines 18 – 22].
Regarding claim 9 and representative claim 18, as combined and under the same rationale as above, The-Hated-One in view of Pillai and Taskiran teaches system and method, wherein predicting the likely path of the customer from the set of customers toward a location is further based on customer profile data (Pillai, The UI module 306 may predict that a user is walking in a direction of a gate with a specific departure ( e.g., an international flight to Italy, etc.), and may then provide a presentation of selections of content related to Italy, for example.) [Pillai, col. 14, lines 18 – 22].
Response to Arguments
Applicant’s arguments are directed towards the amended claims. While performing an updated search, additional prior art was found, which has been cited in this Office action. Therefore, applicant’s arguments are moot in view of the new grounds of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Naresh Vig whose telephone number is (571) 272-6810. The examiner can normally be reached Monday – Friday, 6:30 AM – 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ilana Spar, can be reached at (571) 270-7537. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NARESH VIG/Primary Examiner, Art Unit 3622
February 19, 2026