Detailed Action
Status of Claims
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Action is in reply to the Amendment filed on 2/19/2026.
Claims 1, 31-41, 43-45, 47-50, and 52 are currently pending and have been examined. Claims 2-30, 42, 46, 51, and 53-92 stand cancelled. Claims 1 and 31 have been amended.
Request for Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/19/2026 has been entered.
Priority
Applicant’s claim of priority to provisional U.S. Application No. 63/354,582 is acknowledged. The claims are therefore afforded an effective filing date of 6/22/2022.
Claim Objections
Claim 1 is objected to for the following informality: “a user device comprising a hardware processor, and an interface to receive a branded product interaction; and activate, trigger, or present the product interaction at a visual display” should read “a user device comprising a hardware processor, and an interface, wherein the user device is configured to receive a branded product interaction[[;]] and activate, trigger, or present the product interaction at a visual display”
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 1 is rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claim 1 is directed to a machine. Therefore, claim 1 is directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claim 1 recites at least the following limitations that are believed to recite an abstract idea:
receive a branded product interaction; and
activate, trigger, or present the product interaction at a visual display, wherein the branded product interaction comprises a visual interaction;
performs measurements or operations relating to the user with the branded product interaction with a specific authenticated branded product;
to transmit the branded product interaction associated with the specific authenticated branded product;
an interaction model, a product model, and/or a user model;
generating the branded product interaction associated with the specific branded product depicted in a rendering of a first user and one or more branded product;
receives a rendering of a first user and a candidate product;
receives input data associated with the rendering including user data, and context data;
determines or receives product type data;
evaluates the rendering and input data using at least two distinct authentication modes selected from tag reading, watermark detection, and vision to determine whether the candidate product is an authentic branded product based on product data, user data, and product data associated with user data, wherein the at least two distinct authentication modes are selected based on the product type;
receives product data associated with an authenticated branded product;
updates a user model and a product model based on real-time feedback from user engagement and authentication events;
determines one or more product interaction types to generate based on the interaction model, branded product data, product type data, and/or context data;
generates one or more product interactions only upon successful authentication using the two distinct authentication modes, the one or more product interactions comprising one or more visual elements of the visual interaction;
evaluates displaying a product interaction indication in association with a representation of the rendering of a first user and one or more branded product;
receives a second user engagement with the representation of the rendering of a first user and one or more branded product; and
provides the one or more product interactions to the user to activate, trigger, or present the product interaction at the visual display for display of the one or more visual elements as part of the product interaction.
The above limitations recite the concept of personalized content. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106, in that they recite commercial interactions, e.g. sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claim 1 recites an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
A computer system comprising: a user device comprising a hardware processor and an interface; at least one controller; a communication interface; one or more non-transitory memory; and a hardware processor programmed with executable instructions
A trained model
RFID tag reading, BLE tag reading, and computer vision analysis
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
A computer system comprising: a user device comprising a hardware processor and an interface; at least one controller; a communication interface; one or more non-transitory memory; and a hardware processor programmed with executable instructions
A trained model
RFID tag reading, BLE tag reading, and computer vision analysis
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations that amount to an inventive concept.
For these reasons, the claims are rejected under 35 U.S.C. 101.
Claims 31-41, 43-45, 47-50, & 52 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
First, it is determined whether the claims are directed to a statutory category of invention. In the instant case, claims 31-41, 43-45, 47-50, & 52 are directed to a process. Therefore, claims 31-41, 43-45, 47-50, & 52 are directed to statutory subject matter under Step 1 as described in MPEP 2106 (Step 1: YES).
The claims are then analyzed to determine whether the claims are directed to a judicial exception. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong One of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong Two of Step 2A).
Claim 31 recites at least the following limitations that are believed to recite an abstract idea:
receiving input data that includes a rendering of a first user and a candidate product;
receiving data related to the rendering of a first user and a candidate product;
identifying the first user associated with the rendering;
determining or receiving product type data;
authenticating based on the rendering and the data, whether the candidate product displayed is a branded product by:
Detecting and reading product identifiers using at least two distinct authentication modes, wherein the at least two distinct authentication modes are selected based on the product type data;
analysing, by matching product features to a branded product data store;
Verifying, product authenticity using a combination of data and image data;
retrieving additional data associated with the product; and
generating a branded product interaction to activate, trigger, or present the product interaction.
The above limitations recite the concept of personalized content. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in MPEP 2106, in that they recite commercial interactions, e.g. sales activities/behaviors, and managing personal behavior or relationships or interactions between people, e.g., following rules or instructions. Accordingly, under Prong One of Step 2A, claims 31-41, 43-45, 47-50, & 52 recite an abstract idea (Step 2A, Prong One: YES).
Prong Two of Step 2A is the next step in the eligibility analysis and looks at whether the abstract idea is integrated into a practical application. This requires an additional element or combination of additional elements in the claims to apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the exception.
In this instance, the claims recite the additional elements of:
The method being computer implemented
A hardware processor
Metadata
Sensor data
However, these elements do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
In addition, the recitations are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception.
The dependent claims also fail to recite elements which amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. For example, claims 36-41, 43-44, 47-50, and 52 are directed to the abstract idea itself and do not amount to an integration according to any one of the considerations above.
As for claims 32-35 and 45, these claims are similar to the independent claims except that they recite the further additional elements of sensors, computer vision, an electronic device, a user interface, and “one or more of a 1 D (linear) barcode, 2D barcode, 3D barcode, watermark, microtext, hologram, forensic taggants, a sensor, circuit, printed code, UV code, infrared code, printed smart tag, smart token, RFID technology tag, BLE tag, and/or RTLS tag.” These additional elements are recited at a high level of generality and also do not amount to an improvement in the functioning of a computer or any other technology or technical field; apply the judicial exception with, or by use of, a particular machine; or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort to monopolize the exception. Therefore, the dependent claims do not create an integration for the same reasons.
Step 2B is the next step in the eligibility analysis and evaluates whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception. According to Office procedure, revised Step 2A overlaps with Step 2B, and thus, many of the considerations need not be re-evaluated in Step 2B because the answer will be the same.
In Step 2A, several additional elements were identified as additional limitations:
The method being computer implemented
A hardware processor
Metadata
Sensor data
These additional limitations, including the limitations in the dependent claims, do not amount to an inventive concept because they were already analyzed under Step 2A and did not amount to a practical application of the abstract idea. Therefore, the claims lack one or more limitations that amount to an inventive concept.
For these reasons, the claims are rejected under 35 U.S.C. 101.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections – 35 USC § 103
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 31-41, 43-45, 47-50, & 52 are rejected under 35 U.S.C. 103 as being unpatentable over Systrom et al (US 20140278998 A1), hereinafter Systrom, in view of Jain et al (US20180154400A1), hereinafter Jain, and further in view of Grabiner et al (US9053616B2) hereinafter Grabiner.
Regarding Claim 1, Systrom discloses a computer system for
providing a user device with a branded product interaction to activate, trigger, or present the branded product interaction at a visual display, the branded product interaction associated with a specific authenticated branded product depicted in a rendering of a first user and one or more branded product (Systrom: “set a hotspot in the image around the identified product.” [0032] – “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020]),
the system comprising:
a user device comprising a hardware processor, and an interface (Systrom: [0120]) to receive a branded product interaction; and activate, trigger, or present the product interaction at a visual display, wherein the branded product interaction comprises a visual interaction (Systrom: “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020]);
at least one controller that performs measurements or operations relating to the user device with the branded product interaction with a specific authenticated branded product (Systrom: “The systems and methods of the embodiments can be embodied and/or implemented at least in part as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the application,” [0140] – “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image.” [0032]);
a communication interface to transmit the branded product interaction associated with the specific authenticated branded product (Systrom: “The systems and methods of the embodiments can be embodied …as a machine configured to receive a computer-readable medium storing computer-readable instructions. The instructions can be executed by computer-executable components integrated with the …communication interface,” [0140] – “display a visual cue of a hotspot within the image when the image is viewed, by a user, within his personal social feed ” [0090]);
one or more non-transitory memory storing a trained interaction model, a product model, and/or a user model (Systrom: “The computer-readable medium can be stored on any suitable computer readable media such as RAMs, ROMs, flash memory,” [0140] – “Block S120 can implement … other machine vision and/or machine learning technique to automatically identify a product” [0032]);
the hardware processor programmed with executable instructions for generating the branded product interaction associated with the specific branded product depicted in a rendering of a first user and one or more branded product (Systrom: “the computer system can upload a photo and associated tags over a distributed network, such as over the Internet, and one or more processors throughout the distributed network can implement one or more Blocks of first method” [0022] – “Block S250A can … set the first hotspot” [0086]), wherein the hardware processor:
receives a rendering [image] of a first user and a candidate product (Systrom: “the image is an amateur (i.e., unofficial) image, such as a digital photograph captured with a Smartphone and uploaded to the Social networking system, by a user, though a native application executing on the Smartphone …a first private user uploads the image that includes an amateur photograph of a woman on a beach wearing a bathing suit by Brand Y and holding a branded soda can by Brand X.” [0020] – “a first user can capture a photographic image through a camera integrated into a Smartphone and upload the photographic image to his social feed within the social networking system” [0025] – See also Figure 13.);
receives input data associated with the rendering including user data, and context data (Systrom: “Block S110 can upload an amateur candid photograph, from a first user, to the first user's personal Social feed within the Social networking system, and Block S120 can receive a shoe brand tag, for a pair of shoes shown in the image, from the first user, …and a clothing item tag, for a clothing item shown in the image,” [0027] – “Block S120 analyzes images features, exchangeable image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032]);
evaluates the rendering and input data using at least two distinct authentication methods including computer vision analysis, to determine whether the candidate product is an authentic branded product based on product data, user data, and product data associated with user data (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchangeable image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032] – “Block S120 can implement an object image detection algorithm to identify a region of the image associated with a product, brand, designer, store, merchant, model, etc. Block S120 can then automatically generate the tag for the image or prompt the first user to enter or confirm the tag. For example, Block S120 can generate a set of potential tags for the image based on the object image detection algorithm, and the method can prompt the first user to select a preferred tag or a proper match for the image from the set of potential tags.” [0033]);
receives product data associated with an authenticated branded product (Systrom: “Block S120 can generate a set of potential tags for the image based on the object image detection algorithm, and the method can prompt the first user to select a preferred tag or a proper match for the image from the set of potential tags.” [0033] – “associate the hotspot defined in the image through Blocks S120 and S130 with a particular merchant to enable access to merchant-related information through the hotspot.” [0038]);
updates a user model and a product model based on real-time feedback from user engagement and authentication events (Systrom: automatically generate the tag for the image or prompt the first user to enter or confirm the tag. … generate a set of potential tags for the image based on the object image detection algorithm, and the method can prompt the first user to select a preferred tag or a proper match for the image from the set of potential tags.” [0033] – “associate a tag received from a user, brand, etc. with all or a portion of the region to define a hotspot within the image” [0035] – “user profile can also store other information provided by the user, for example, images or videos. Images of users can be tagged with identification information of users …A user profile in the user profile store 804 can also maintain references to actions by the corresponding user performed on content items in the content store” [0125]);
determines one or more product interaction [hotspot] types to generate based on the interaction model, branded product data, product type data, and/or context data (Systrom: “Block S250A can select the particular electronic storefront through which the first user has previously shopped, that carries multiple brands preferred by the first user, that retains past shipping and billing information of the first user, etc. and set the first hotspot to link to the particular electronic storefront.” [0086] – “select the border color and/or border thickness based on the type and/or number of hotspots in the image. For example, Block S180 can apply a two-pixel wide green border to an image with one product tag, a two pixel wide orange border to an image with one brand tag, a four-pixel wide green border to an image with two product tags, and a two-pixel wide orange border inside a two-pixel wide green border to an image with one brand tag and one product tag.” [0067]);
generates one or more product interactions only upon successful authentication using the two distinct authentication modes, the one or more product interactions comprising one or more visual elements of the visual interaction (Systrom: “Generally, Block S130 functions to associate a tag received from a user, brand, etc. with all or a portion of the region to define a hotspot within the image.” [0035] – “Block S250A can select the particular electronic storefront through which the first user has previously shopped, that carries multiple brands preferred by the first user, that retains past shipping and billing information of the first user, etc. and set the first hotspot to link to the particular electronic storefront.” [0086] – With reference to Figure 1, it is recognized that Step S120, the “object image detection,” must happen in order for Step130 to act on the detected object.);
evaluates displaying a product interaction indication in association with a representation of the rendering of a first user and one or more branded product (Systrom: “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020]);
receives a second user engagement with the representation of the rendering of a first user and one or more branded product (Systrom: “the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020]); and
provides the one or more product interactions to the user device to activate, trigger, or present the product interaction at the visual display for display of the one or more visual elements as part of the product interaction (Systrom: “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020] – The user views the hotspot on the image on a device [0091].).
While Systrom further teaches identifying products using RFID tag reading [0109] along with machine vision [0108], it does not specifically teach that the at least two distinct authentication methods are selected from: RFID tag reading, BLE tag reading, watermark detection, and computer vision analysis; determining or receiving product type data; or that the at least two distinct authentication modes are selected based on the product type data.
However, Jain teaches systems for identification of items [0004], including that the at least two distinct authentication methods are selected from: RFID tag reading, BLE tag reading, watermark detection, and computer vision analysis (Jain: “various machine-vision techniques can be performed on an image of the first respective item to recognize such, and/or machine-readable information (e.g., … unique identifier wireless read from a … RFID tag) can be read from the first respective item or a label … to recognize such.” [0135]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Systrom would continue to teach evaluating the rendering and input data using at least two distinct authentication methods including computer vision analysis, to determine whether the candidate product is an authentic branded product based on product data, user data, and product data associated with user data, except that now it would also teach that the at least two distinct authentication methods are selected from: RFID tag reading, BLE tag reading, watermark detection, and computer vision analysis, according to the teachings of Jain. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in an improved ability to automatically recognize item features (Jain: [0036]).
While Systrom/Jain do not teach determining or receiving product type data, or that the at least two distinct authentication modes are selected based on the product type data, Grabiner teaches product-authenticity determination methods [Col. 1], including:
determining or receiving product type data (Grabiner: “determining by the image capture and communication device a type for each of the one or more environmental monitors on the product label” CLM 1); and
that the at least two distinct authentication modes are selected based on the product type data (Grabiner: “based on the respective determined types of each of the one or more environmental monitors on the product label, analyzing image features of the one or more environmental monitors, the different types of environmental monitors having different image features which are analyzed using different image analysis techniques selected based on the respective type and features of the environmental monitor” CLM 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Systrom/Jain would continue to teach evaluating the rendering and input data using at least two distinct authentication modes selected from: RFID tag reading, BLE tag reading, watermark detection, and computer vision analysis, to determine whether the candidate product is an authentic branded product based on product data, user data, and product data associated with user data, except that now it would also teach determining or receiving product type data and that the at least two distinct authentication modes are selected based on the product type data, according to the teachings of Grabiner. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in an improved ability to verify the authenticity of a product (Grabiner: Col. 7).
Regarding Claim 31, Systrom discloses a computer implemented method for
generating output instructions for a product interaction with a branded product to activate, trigger, or present the product interaction (Systrom: “set a hotspot in the image around the identified product.” [0032] – “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020]),
the method comprising:
receiving, using a hardware processor, input data that includes a rendering of a first user [image] and a candidate product (Systrom: “the image is an amateur (i.e., unofficial) image, such as a digital photograph captured with a Smartphone and uploaded to the Social networking system, by a user, though a native application executing on the Smartphone …a first private user uploads the image that includes an amateur photograph of a woman on a beach wearing a bathing suit by Brand Y and holding a branded soda can by Brand X.” [0020] – “a first user can capture a photographic image through a camera integrated into a Smartphone and upload the photographic image to his social feed within the social networking system” [0025] – See also Figure 13);
receiving, using the hardware processor, metadata related to the rendering of a first user and a candidate product (Systrom: “Block S110 can upload an amateur candid photograph, from a first user, to the first user's personal Social feed within the Social networking system, and Block S120 can receive a shoe brand tag, for a pair of shoes shown in the image, from the first user, …and a clothing item tag, for a clothing item shown in the image,” [0027] – “Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032]);
identifying, using the hardware processor, the first user associated with the rendering (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032]);
authenticating, using the hardware processor, based on the rendering and the metadata, whether the candidate product displayed is a branded product (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032] – “Block S120 can implement an object image detection algorithm to identify a region of the image associated with a product, brand, designer, store, merchant, model, etc. Block S120 can then automatically generate the tag for the image or prompt the first user to enter or confirm the tag. For example, Block S120 can generate a set of potential tags for the image based on the object image detection algorithm, and the method can prompt the first user to select a preferred tag or a proper match for the image from the set of potential tags.” [0033]) by:
detecting and reading product identifiers using at least two distinct authentication modes (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchangeable image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032] – “Block S120 can implement an object image detection algorithm to identify a region of the image associated with a product, brand, designer, store, merchant, model, etc. Block S120 can then automatically generate the tag for the image or prompt the first user to enter or confirm the tag. For example, Block S120 can generate a set of potential tags for the image based on the object image detection algorithm, and the method can prompt the first user to select a preferred tag or a proper match for the image from the set of potential tags.” [0033]);
analysing, by matching product features to a branded product database (Systrom: “receives a text-based descriptor of a product visible in the image, access a database of template images of a product based on the descriptor, and implement template matching to identify the product in the image. ” [0032]);
verifying, product authenticity using a combination of text and image data (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchangeable image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image. …receives a text-based descriptor of a product visible in the image, access a database of template images of a product based on the descriptor, and implement template matching to identify the product in the image” [0032]);
retrieving additional data associated with the product (Systrom: “Block S120 can generate a set of potential tags for the image based on the object image detection algorithm, and the method can prompt the first user to select a preferred tag or a proper match for the image from the set of potential tags.” [0033] – “associate the hotspot defined in the image through Blocks S120 and S130 with a particular merchant to enable access to merchant-related information through the hotspot.” [0038]); and
generating a branded product interaction to activate, trigger, or present the product interaction (Systrom: “set a hotspot in the image around the identified product.” [0032] – “Block S250A can select the particular electronic storefront through which the first user has previously shopped, that carries multiple brands preferred by the first user, that retains past shipping and billing information of the first user, etc. and set the first hotspot to link to the particular electronic storefront.” [0086] – “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020]).
While Systrom further teaches identifying products using RFID tag reading [0109] along with machine vision [0108], it does not specifically teach that the combination of data is a combination of sensor and image data; determining or receiving product type data; and that the at least two distinct authentication modes are selected based on the product type data.
However, Jain teaches systems for identification of items [0004], including that the combination of data is a combination of sensor and image data (Jain: “various machine-vision techniques can be performed on an image of the first respective item to recognize such, and/or machine-readable information (e.g., … unique identifier wireless read from a … RFID tag) can be read from the first respective item or a label … to recognize such.” [0135]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Systrom would continue to teach verifying product authenticity using a combination of text and image data, except that now it would also teach that the combination of data is a combination of sensor and image data, according to the teachings of Jain. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in an improved ability to automatically recognize item features (Jain: [0036]).
While Systrom/Jain do not teach determining or receiving product type data, or that the at least two distinct authentication modes are selected based on the product type data, Grabiner teaches product-authenticity determination methods [Col. 1], including:
determining or receiving product type data (Grabiner: “determining by the image capture and communication device a type for each of the one or more environmental monitors on the product label” CLM 1); and
that the at least two distinct authentication modes are selected based on the product type data (Grabiner: “based on the respective determined types of each of the one or more environmental monitors on the product label, analyzing image features of the one or more environmental monitors, the different types of environmental monitors having different image features which are analyzed using different image analysis techniques selected based on the respective type and features of the environmental monitor” CLM 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because the results would be predictable. Specifically, Systrom/Jain would continue to teach evaluating the rendering and input data using at least two distinct authentication modes selected from: RFID tag reading, BLE tag reading, watermark detection, and computer vision analysis, to determine whether the candidate product is an authentic branded product based on product data, user data, and product data associated with user data, except that now it would also teach determining or receiving product type data and that the at least two distinct authentication modes are selected based on the product type data, according to the teachings of Grabiner. This is a predictable result of the combination.
In addition, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine these references because it would result in an improved ability to verify the authenticity of a product (Grabiner: Col. 7).
Regarding Claim 32, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprising performing measurements using one or more sensors for receiving the input depicting the first user and the one or more displayed candidate product (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032]).
Regarding Claim 33, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprising transmitting control signals to one or more sensors to perform measurements using the one or more sensors for receiving the input depicting the first user and the one or more displayed candidate products (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032]).
Regarding Claim 34, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprising using computer vision to extract product data from image or video data received as the input to authenticate whether the one or more displayed candidate product is associated with the branded product to determine one or more authenticated displayed product (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032]).
Regarding Claim 35, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprising causing an electronic device to display the product interaction as a visualization on a user interface as part of generating the branded product interaction (Systrom: “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020] –See Figure 13.).
Regarding Claim 36, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 the method further comprising evaluating whether the rendering of a first user and a candidate product contains a sufficient visible portion of the candidate product to identify whether the candidate product is an authentic branded product (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image.” [0032] – “Block S150B can filter out images of a quality below a threshold quality, … or images that fail to meet any other criteria. Block S150B can also implement machine vision and/or machine learning techniques to identify and filter out unsuitable or less desirable images.” [0047]).
Regarding Claim 37, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 wherein the candidate product is an item of apparel (Systrom: “uploads the image that includes an amateur photograph of a woman on a beach wearing a bathing suit by Brand Y and holding a branded soda can by Brand X.” [0020] – “a pair of shoes shown in the image, …a clothing item shown in the image,” [0027]).
Regarding Claim 38, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 the method further comprising identifying whether there are more than one candidate products within the rendering of a first user and a candidate product and if there are more than one candidate products authenticating each of the candidate products displayed (Systrom: “Block S110 can upload an amateur candid photograph, from a first user, to the first user's personal Social feed within the Social networking system, and Block S120 can receive a shoe brand tag, for a pair of shoes shown in the image, from the first user, a Soda brand tag and a product tag for a soda can, shown in the image, from a second user, a vehicle manufacturer tag for a vehicle, shown in the image, from a third user, and a clothing item tag, for a clothing item shown in the image, from a fourth user.” [0027] – “select the border color and/or border thickness based on the type and/or number of hotspots in the image. For example, Block S180 can apply a two-pixel wide green border to an image with one product tag, a two pixel wide orange border to an image with one brand tag, a four-pixel wide green border to an image with two product tags, and a two-pixel wide orange border inside a two-pixel wide green border to an image with one brand tag and one product tag.” [0067] – See Figure 13.).
Regarding Claim 39, Systrom/Jain/Grabiner teach the computer implemented method of claim 38 further comprising generating a product interaction that combines each of the more than one candidate product in a combined product interaction (Systrom: “Block S110 can upload an amateur candid photograph, from a first user, to the first user's personal Social feed within the Social networking system, and Block S120 can receive a shoe brand tag, for a pair of shoes shown in the image, from the first user, a Soda brand tag and a product tag for a soda can, shown in the image, from a second user, a vehicle manufacturer tag for a vehicle, shown in the image, from a third user, and a clothing item tag, for a clothing item shown in the image, from a fourth user.” [0027] – “select the border color and/or border thickness based on the type and/or number of hotspots in the image. For example, Block S180 can apply a two-pixel wide green border to an image with one product tag, a two pixel wide orange border to an image with one brand tag, a four-pixel wide green border to an image with two product tags, and a two-pixel wide orange border inside a two-pixel wide green border to an image with one brand tag and one product tag.” [0067] – See Figure 13.).
Regarding Claim 40, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 wherein identifying, using the hardware processor, the first user associated with the rendering further comprises identifying an anonymous user instance and validating the product authenticity based on specific aspects related to the product (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image.” [0032] – “Block S120 can implement an object image detection algorithm to identify a region of the image associated with a product, brand, designer, store, merchant, model, etc. Block S120 can then automatically generate the tag for the image or prompt the first user to enter or confirm the tag. For example, Block S120 can generate a set of potential tags for the image based on the object image detection algorithm, and the method can prompt the first user to select a preferred tag or a proper match for the image from the set of potential tags.” [0033]).
Regarding Claim 41, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprises: receiving one or more of a user model, and product model, and/or a retail model; evaluating the candidate product against one or more of a product model, a user model, and/or pre- authenticated branded product metadata associated with user data to authenticate whether the candidate product is a branded product (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image. In another example, Block S120 receives a text-based descriptor of a product visible in the image, access a database of template images of a product based on the descriptor, and implement template matching to identify the product in the image.” [0032] – “Block S120 can implement an object image detection algorithm to identify a region of the image associated with a product, brand, designer, store, merchant, model, etc. Block S120 can then automatically generate the tag for the image or prompt the first user to enter or confirm the tag. For example, Block S120 can generate a set of potential tags for the image based on the object image detection algorithm, and the method can prompt the first user to select a preferred tag or a proper match for the image from the set of potential tags.” [0033]).
Regarding Claim 43, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprising evaluating whether the authentic branded product is available in a specific region, gender designation, market, color, size, pattern, version, membership level, and/or season (Systrom: “Block S160A can selectively callout a Subset of tagged items in the image. Such as based on user purchasing history, user location and stock at a nearby brick and-mortar retail location, a perceived user interest, a user demographic, etc.,” [0056] – “Once the location of the user is determined, Block S160B can select a local retailer that is suitably close to the user, such as a retailer that is within five mile” [0062]).
Regarding Claim 44, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 where authenticating, using the hardware processor, based on the rendering and the metadata, whether the candidate product is a branded product, further comprises evaluating the candidate product against one or more of a product data model, and/or a pre-authenticated branded product metadata associated with user data (Systrom: “Block S120 can implement object recognition, character recognition, template matching, edge detection, and/or any other machine vision and/or machine learning technique to automatically identify a product or brand represented in the image. In one example, Block S120 analyzes images features, exchange able image file format (exif) data of the image, location data, Social context (e.g., user check-ins), and any other relevant image meta to generate the tag for the image. In another example, Block S120 receives a text-based descriptor of a product visible in the image, access a database of template images of a product based on the descriptor, and implement template matching to identify the product in the image.” [0032]).
Regarding Claim 45, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprising instructions for authenticating the candidate product is a branded product by detecting and reading one or more of a 1 D (linear) barcode, 2D barcode, 3D barcode, watermark, microtext, hologram, forensic taggants, a sensor, circuit, printed code, UV code, infrared code, printed smart tag, smart token, RFID technology tag, BLE tag, and/or RTLS tag within, attached to, or on the candidate product (Systrom: “The tag received in Block S120 can also include a product or brand description, name, Stock keeping unit (SKU) number, bar code, or other identifier of the product or brand. In this implementation, Block S120 can analyze the tag (e.g., key word extraction) to extract a brand or product identifier from the tag and then attach a link or pointer to a respective region of the image based on the identifier extracted from the tag.” [0029] – “Block S120 receives a text-based descriptor of a product visible in the image, access a database of template images of a product based on the descriptor, and implement template matching to identify the product in the image.” [0032]).
Regarding Claim 47, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprising instructions to add to a graphical representation associated with the first user input an indicator that there are one or more product interactions associated with the first user input (Systrom: “set a hotspot in the image around the identified product.” [0032] – “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020] – See Figure 13.).
Regarding Claim 48, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 47, the method further comprising receiving an input from a second user engaging with the interaction indicator and providing the product interaction to the second user (Systrom: “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020]).
Regarding Claim 49, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31, the method further comprising receiving metadata associated with a second user, the metadata comprising one or more of a user region, user size, user purchase history, user wishlist, user viewed product list, user activity, user activity history, user planned activity, user preference associated with color, feel state, activity, and/or fabric type (Systrom: “Block S120 can also receive multiple tags from one or more users or brands. For example, Block S110 can upload an amateur candid photograph, from a first user, to the first user's personal social feed within the social networking system, and Block S120 can receive a shoe brand tag, for a pair of shoes shown in the image, from the first user, a soda brand tag and a product tag for a soda can, shown in the image, from a second user, a vehicle manufacturer tag for a vehicle, shown in the image, from a third user, and a clothing item tag, for a clothing item shown in the image, from a fourth user.” [0027] – “Block S250A selects the first link for the first hotspot substantially in real time. … Block S250A retrieves user data stored by the social networking system and implements the user data to select the electronic store that is particularly relevant to the first user. For example, Block S250A can analyze user transaction history, …browsing history, … user shopping trends, …and/or shipping preferences, etc.” [0086]).
Regarding Claim 50, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 further comprising instructions to determine a substitute branded product based on one or more of the authenticated displayed product availability in an inventory, availability in a preferred region, availability in a preferred size, availability in a preferred option, availability for a preferred gender, and/or an updated version associated with the branded product (Systrom: “Block S370 can similarly communicate additional product information, such as …information or locations of local merchants that carry the product, similar products by other brands, local or online sale offers for the product, another local merchant offering the product for sale and a local or overage product price, other products that complement the product (e.g., shoes that pair well with a pair of pants),” [0112] – “see how the product fits, or see what other products, styles, or accessories function with or complement the product.” [0100] – “Block S250A can analyze … user shopping trends, a user interest, other brands or products of interest to the first user (e.g., another product which the first user may purchase with the first product), …From this information, Block S250A can select a particular electronic storefront, from a list of available or preferred electronic storefronts, that is particularly relevant to the first user.” [0086]).
Regarding Claim 52, Systrom/Jain/Grabiner teach the computer implemented method for generating output instructions for a product interaction with a branded product of claim 31 wherein the product interaction is one or more of adding the displayed branded product to a view history, adding the displayed branded product to a customized storefront, adding identification values associated with the displayed branded product to an incentive system, displaying a closet tour associated with the branded product, displaying environmental and/or sustainability factors associated with a branded product, displaying a 360 degree view of the product, displaying an interactive simulation of the product, displaying a pop-up with additional product details for the displayed branded product, providing a link to purchase the displayed branded product, providing a link to receive a special offer related to the displayed branded product, displaying an interactive product site for the displayed branded product, providing a link to related products for the displayed branded product, providing a link to other products related to the user associated with the displayed branded product, displaying activities associated with the displayed branded product, displaying activities associated with the first user associated with the displayed branded product, displaying communities associated with the displayed branded product, and/or displaying communities associated with the first user associated with the displayed branded product (Systrom: “set a hotspot in the image around the identified product.” [0032] – “Block S250A can select the particular electronic storefront through which the first user has previously shopped, that carries multiple brands preferred by the first user, that retains past shipping and billing information of the first user, etc. and set the first hotspot to link to the particular electronic storefront.” [0086] – “when a third user clicks on the first portion of the image, Block S160A can direct the third user to a social feed of Brand X within the social networking system, and, when the third user clicks on the second portion of the image, Block S160B can direct the third user to an online store in which the third user can order an identical or similar swimsuit.” [0020]).
Response to Arguments
Applicant's arguments filed 2/19/2026 have been fully considered but they are not persuasive.
Claim Rejection – 35 USC §101
Applicant argues that “the claims are directed to a computer system and a computer implemented method. The computer components are thus inextricably tied to the claimed subject matter. These are not methods of organizing human activity because they are carried out or implemented by hardware components of a computer.”
Examiner disagrees. With reference to the rejection above, the claims recite steps which amount to a concept for generating personalized content. This concept falls within Certain Methods of Organizing Human Activity in that it recites commercial interactions and managing personal behavior or relationships or interactions between people, similar to the examples provided in MPEP 2106.04. The mere presence of computer-related additional elements, such as a recitation that the method is computer-implemented as argued, does not preclude the claims from reciting an abstract idea.
Applicant further argues that “the pending subject matter employs any information derived from judicial exceptions to operate the readings or analyses based on the product type data,” and argues that “this can operate to improve computational efficiency and power load and can improve the accuracy of product authentication.”
Examiner disagrees. The argued ability to select authentication methods based on determined or received product type data is part of the abstract idea itself, such that the alleged improvement is at best a business improvement stemming solely from the abstract idea. The additional elements, rather than improving the functioning of a computer, are invoked as mere instructions to apply the abstract idea to a technological environment, creating only a general linking to computer technology [MPEP 2106.05(f)].
Applicant further argues that “the claims as a whole amount to significantly more than” the abstract idea, arguing that, similar to the discussion above with respect to Prong 2, the claims “can be used to efficiently and accurately generate the branded product interactions for user.” Applicant also argues that certain prior art references do not teach limitations of the claims and suggests this impacts the 101 analysis.
Examiner disagrees, and notes with reference to MPEP 2106.05 that “Because they are separate and distinct requirements from eligibility, patentability of the claimed invention under 35 U.S.C. 102 and 103 with respect to the prior art is neither required for, nor a guarantee of, patent eligibility under 35 U.S.C. 101.” As addressed above, the argued ability to select authentication methods based on determined or received product type data is part of the abstract idea itself, such that the alleged improvement is at best a business improvement stemming solely from the abstract idea. The additional elements, rather than improving the functioning of a computer, are invoked as mere instructions to apply the abstract idea to a technological environment, creating only a general linking to computer technology [MPEP 2106.05(f)].
Claim Rejection – 35 USC §103
Applicant argues that Systrom & Jain fail to teach “determines or receives product type data; evaluates the rendering and input data using at least two distinct authentication modes selected from: RFID tag reading, BLE tag reading, watermark detection, and computer vision analysis, to determine whether the candidate product is an authentic branded product based on product data, user data, and product data associated with user data, wherein the at least two distinct authentication modes are selected based on the product type data.”
Examiner partially disagrees. With reference to the rejection above, Systrom teaches receiving an image capturing a person and a garment [0020], and user input regarding the product and additional social data [0027, 0032]. This data is evaluated using two or more techniques, including “machine vision” to identify a product in the image and generate a tag for the image [0032-0033]. Potential tags are presented to a user for confirmation and selection [0033]. This user selection/feedback causes an update to the image file with a hotspot [0125], and user interactions are recorded in their profile [0035].
However, while Systrom teaches identifying products using RFID tag reading [0109] along with machine vision [0108], Jain is relied upon to teach that RFID tags can be read for information, in conjunction with machine-vision techniques performed on an image, to recognize the item therein [0135]. Furthermore, newly relied-upon reference Grabiner is used to teach determining a type of a product in an image and, based on the determined type, analyzing the features of the image using different image analysis techniques [CLM 1].
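The claimed selection of at least two distinct authentication modes from product type data can be illustrated, without attributing this code to Grabiner or any other cited reference, by the following Python sketch. The mode names, the type-to-mode table, and the requirement that all selected modes agree are assumptions made only for illustration.

# Illustrative sketch only: all names, mappings, and the agreement rule are hypothetical.
MODES_BY_PRODUCT_TYPE = {
    "apparel": ["computer_vision", "rfid"],          # e.g., garments with sewn-in RFID tags
    "footwear": ["computer_vision", "watermark"],    # e.g., printed/embedded label watermarks
    "electronics": ["ble", "rfid"],                  # e.g., powered items exposing BLE beacons
}

def select_authentication_modes(product_type, default=("computer_vision", "watermark")):
    """Pick at least two distinct authentication modes based on product type data."""
    return MODES_BY_PRODUCT_TYPE.get(product_type, list(default))

def authenticate(rendering, input_data, product_type, mode_handlers):
    """Run each selected mode's handler and require every selected mode to agree.
    `mode_handlers` maps a mode name to a callable(rendering, input_data) -> bool."""
    modes = select_authentication_modes(product_type)
    return all(mode_handlers[mode](rendering, input_data) for mode in modes)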
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Avedissian et al (US 20160205431 A1) teaches systems for augmenting videos with information and purchase links based on recognized products.
Hedges et al (US 20100241528 A1) teaches systems of verifying authenticity of listed items on a website.
US 11861528 B1 teaches systems for detecting products that infringe on authentic designs.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J SULLIVAN whose telephone number is (571)272-9736. The examiner can normally be reached Mon - Fri 8-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein, can be reached on (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.J.S./Examiner, Art Unit 3689
/MARISSA THEIN/Supervisory Patent Examiner, Art Unit 3689