DETAILED ACTION
This Office Action is in response to the original application filed on 04/02/2025. Claims 1-20 are pending in the application, of which claims 1, 15, and 20 are presented in independent form.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
This application is a continuation that claims the benefit of U.S. Patent Application No. 18/912,965 filed on 10/11/2024, which has since been issued as U.S. Patent No. 12,411,904, which claims the benefit of U.S. Patent Application No. 18/653,092 filed on 05/02/2024, which has since been issued as U.S. Patent No. 12,141,220, which claims the benefit of U.S. Patent Application No. 18/503,794 filed on 11/07/2023, which has since been issued as U.S. Patent No. 12,008,062, which claims the benefit of U.S. Patent Application No. 18/169,488 filed on 02/15/2023, which has since been issued as U.S. Patent No. 11,836,208, which claims the benefit of U.S. Patent Application No. 17/341,573 filed on 06/08/2021, which has since been issued as U.S. Patent No. 11,704,381, which claims the benefit of U.S. Patent Application No. 16/989,573 filed on 08/10/2020, which has since been issued as U.S. Patent No. 11,468,113, which claims the benefit of U.S. Patent Application No. 15/344,342 filed on 11/04/2016, which has since been issued as U.S. Patent No. 10,740,387, which claims the benefit of U.S. Patent Application No. 15/211,321 filed on 07/15/2016, which has since been issued as U.S. Patent No. 10,423,657, which claims the benefit of U.S. Provisional Patent Application No. 62/192,873 filed on 07/15/2015.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 04/02/2025 and 10/27/2025 were filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Drawings
The original drawings submitted on 04/02/2025 are accepted.
Specification
The original specification submitted on 04/02/2025 is accepted.
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because of the following:
The abstract uses the phrase "Systems and methods are disclosed for…"; such phrases should be avoided.
Correction is required. See MPEP § 608.01(b).
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 15, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,411,904 in view of Sunkada (U.S. Pub. No. 2014/0089145, cited in IDS). See the table below for the double patenting obviousness analysis.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify U.S. Patent No. 12,411,904 to incorporate the teachings of Sunkada because both address the same field of image-based product search systems, and incorporating Sunkada provides a product marketing and/or shopping platform having improved and/or new tools for marketing products or services, which provide consumers with improved convenience and/or options for shopping for products or services, as taught by Sunkada [0017].
Present Application 19/098,169
U.S. Pat. No. 12,411,904
Analysis
1. A computer-implemented method for providing image-based content recommendations, the method comprising:
receiving one or more of an image of a product or text associated with the product included within image data associated with a client device;
determining, using at least one machine learning model, a plurality of content recommendations associated with the product based on the one or more of the image of the product or the text associated with the product;
causing to be displayed, at a display of the client device, a graphical user interface including the plurality of content recommendations;
receiving, as feedback, one or more user interactions with at least one content recommendation of the plurality of content recommendations included in the graphical user interface displayed on the display of the client device; and
adapting the at least one machine learning model based on the feedback.
1. A method for providing item recommendations, the method comprising:
receiving an image from a computing device associated with a user;
identifying one or more of an item or text associated with the item included in the image;
determining, using at least one machine learning model, an item recommendation based on a plurality of images and the identified one or more of the item or the text, wherein the item recommendation corresponds to one of the plurality of images that includes a matching item;
providing the item recommendation to the computing device for display,
Not claimed.
Both are methods for providing recommendations.
Both receive an image including an item or text from a device.
Both use machine learning models to make recommendations based on an image of an item or associated text.
Both display recommendations to a user in a GUI.
The patent does not claim the limitation. However, the limitation would have been obvious in view of prior art reference Sunkada under nonstatutory obviousness-type double patenting. Specifically, see Sunkada, [0034], [0038], and [0074]. Refer to the 35 U.S.C. 103 rejection for more details.
Claims 15 and 20 recite substantially the same subject matter as claim 1 in a different statutory category and are rejected for the same reasons.
Claims 1, 15, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,141,220. Although the claims at issue are not identical, they are not patentably distinct from each other because of the mapping presented below.
Present Application 19/098,169
U.S. Pat. No. 12,141,220
Analysis
1. A computer-implemented method for providing image-based content recommendations, the method comprising:
receiving one or more of an image of a product or text associated with the product included within image data associated with a client device;
determining, using at least one machine learning model, a plurality of content recommendations associated with the product based on the one or more of the image of the product or the text associated with the product;
causing to be displayed, at a display of the client device, a graphical user interface including the plurality of content recommendations;
receiving, as feedback, one or more user interactions with at least one content recommendation of the plurality of content recommendations included in the graphical user interface displayed on the display of the client device; and
adapting the at least one machine learning model based on the feedback.
1. A method for providing item recommendations, the method comprising:
receiving an image from a computing device associated with a user;
identifying one or more of an item or text associated with the item included in the image;
determining, using at least one machine learning model, an item recommendation based on a plurality of images and the identified one or more of the item or the text, wherein the item recommendation corresponds to one of the plurality of images that includes a matching item;
providing the item recommendation to the computing device for display, the item recommendation including the one of the plurality of images that includes the matching item; and
adapting the at least one machine learning model based on feedback associated with the item recommendation that is received based on interactions of the user with the item recommendation displayed on the computing device.
Both are methods for providing recommendations.
Both receive an image including an item or text from a device.
Both use machine learning models to make recommendations based on an image of an item or associated text.
Both display recommendations to a user in a GUI.
Both adapt the machine learning model based on feedback received from user interactions.
Claims 15 and 20 recite substantially the same subject matter as claim 1 in a different statutory category and are rejected for the same reasons.
Claims 1, 15, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,008,062. Although the claims at issue are not identical, they are not patentably distinct from each other because of the mapping presented below.
Present Application 19/098,169
U.S. Pat. No. 12,008,062
Analysis
1. A computer-implemented method for providing image-based content recommendations, the method comprising:
receiving one or more of an image of a product or text associated with the product included within image data associated with a client device;
determining, using at least one machine learning model, a plurality of content recommendations associated with the product based on the one or more of the image of the product or the text associated with the product;
causing to be displayed, at a display of the client device, a graphical user interface including the plurality of content recommendations;
receiving, as feedback, one or more user interactions with at least one content recommendation of the plurality of content recommendations included in the graphical user interface displayed on the display of the client device; and
adapting the at least one machine learning model based on the feedback.
1. A method for providing product recommendations to a user, the method comprising:
receiving, from an application executing on a client computing device of the user, an image displayed on the client computing device;
identifying a product included in the image;
determining, using at least one machine learning model, a product recommendation based on a plurality of images and the identified product, wherein the product recommendation corresponds to an image from the plurality of images that includes a matching product to the identified product;
providing the product recommendation to the application for display on the client computing device, the product recommendation including the image from the plurality of images that includes the matching product; and
adapting the at least one machine learning model based on feedback associated with the product recommendation that is received based on interactions of the user with the product recommendation displayed on the client computing device.
Both are methods for providing recommendations.
Both receive an image including an item on a device.
Both use machine learning models to make recommendations based on an image of an item or associated text.
Both display recommendations to a user in a GUI.
Both adapt the machine learning model based on feedback received from user interactions.
Claims 15 and 20 recite substantially the same subject matter as claim 1 in a different statutory category and are rejected for the same reasons.
Claims 1, 5-8, 10-16, and 18-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 4, 5, 8, and 10-12 of U.S. Patent No. 11,836,208. Although the claims at issue are not identical, they are not patentably distinct from each other because of the mapping presented below.
Present Application 19/098,169
U.S. Pat. No. 11,836,208
Analysis
1. A computer-implemented method for providing image-based content recommendations, the method comprising:
receiving one or more of an image of a product or text associated with the product included within image data associated with a client device;
determining, using at least one machine learning model, a plurality of content recommendations associated with the product based on the one or more of the image of the product or the text associated with the product;
causing to be displayed, at a display of the client device, a graphical user interface including the plurality of content recommendations;
receiving, as feedback, one or more user interactions with at least one content recommendation of the plurality of content recommendations included in the graphical user interface displayed on the display of the client device; and
adapting the at least one machine learning model based on the feedback.
1. A computer-implemented method for analyzing images to display a graphical user interface, comprising:
detecting, by an application executing on a client computing device associated with a user, an image displayed on a display of the client computing device;
detecting a product in the detected image;
identifying, using at least one machine learning model, a plurality of product recommendations based on the detected product,
causing to be displayed, at the display of the client computing device, a graphical user interface (GUI), wherein a first portion of the GUI includes the image, the first subset of the plurality of product recommendations, and one or more selectable items displayed over the image, and wherein a second portion of the GUI includes the second subset of the plurality of product recommendations;
receiving feedback associated with at least one of the plurality of product recommendations from interactions of the user with the GUI displayed on the display of the client computing device; and
adapting the at least one machine learning model based on the feedback.
Both are methods for image-based systems.
Both receive a product image on a device.
Both use machine learning models to make recommendations based on an image of an item or associated text.
Both display recommendations to a user in a GUI.
Both adapt the machine learning model based on feedback received from user interactions.
Claims 15 and 20 recite substantially the same subject matter as claim 1 in a different statutory category and are rejected for the same reasons.
5. The computer-implemented method of claim 1, wherein the at least one content recommendation of the plurality of content recommendations included in the graphical user interface and displayed, at the display of the client device, includes a link to a resource, and the client device is caused to access the resource through the link in response to receiving a selection associated with the at least one content recommendation via the graphical user interface.
4. The computer-implemented method of claim 1, further comprising determining a link associated with one or more of the plurality of product recommendations, wherein the displayed GUI includes the link.
5. The computer-implemented method of claim 4, further comprising retrieving graphical content associated with the link, wherein the displayed GUI includes the retrieved graphical content.
Both access content/resources associated with an item via a link in a GUI.
6. The computer-implemented method of claim 5, wherein the selection is at least one of the one or more user interactions received as feedback.
1. receiving feedback associated with at least one of the plurality of product recommendations from interactions of the user with the GUI displayed on the display of the client computing device;
Both treat the user interaction with an item recommendation in the GUI as feedback.
7. The computer-implemented method of claim 1, wherein the determining, using the at least one machine learning model, the plurality of content recommendations associated with the product further comprises:
determining at least one product category for the product, wherein the plurality of content recommendations associated with the product are further determined based on the at least one product category.
8. The computer-implemented method of claim 1, further comprising analyzing the image to determine at least one product category for the detected product, wherein identifying the plurality of product recommendations is based on the determined product category.
Both identify product recommendations based on a determined product category.
8. The computer-implemented method of claim 1, wherein the determining, using the at least one machine learning model, the plurality of content recommendations associated with the product further comprises:
determining, using the at least one machine learning model, the plurality of content recommendations based on a plurality of images of a plurality of products and the one or more of the image of the product or the text associated with the product, wherein the plurality of content recommendations are associated with a subset of the plurality of images each identified as including a matching product to the product.
1. detecting a product in the detected image; identifying, using at least one machine learning model, a plurality of product recommendations based on the detected product, wherein a first subset of the plurality of product recommendations is associated with the detected product and a second subset of the plurality of product recommendations is associated with one or more products similar to the detected product;
10. The computer-implemented method of claim 1, wherein the first subset of the plurality of product recommendations includes a product identical to the detected product in the image.
Both use a machine learning model to identify product recommendations based on products in an image. Both have a subset of product recommendations that are identical matches to the product in the image.
Claim 19 recites substantially the same subject matter in a different statutory category and is rejected for the same reasons.
10. The computer-implemented method of claim 8, wherein the matching product is one of an identical product to the product or a similar product to the product.
1. detecting a product in the detected image; identifying, using at least one machine learning model, a plurality of product recommendations based on the detected product, wherein a first subset of the plurality of product recommendations is associated with the detected product and a second subset of the plurality of product recommendations is associated with one or more products similar to the detected product;
10. The computer-implemented method of claim 1, wherein the first subset of the plurality of product recommendations includes a product identical to the detected product in the image.
Both use a machine learning model to identify product recommendations based on products in an image. Both have subsets of product recommendations that are identical matches to the product in the image or similar to it.
11. The computer-implemented method of claim 1, wherein a first subset of the plurality of content recommendations include content associated with an identical product to the product and a second subset of the plurality of content recommendations include content associated with one or more products similar to the product.
1. detecting a product in the detected image; identifying, using at least one machine learning model, a plurality of product recommendations based on the detected product, wherein a first subset of the plurality of product recommendations is associated with the detected product and a second subset of the plurality of product recommendations is associated with one or more products similar to the detected product;
10. The computer-implemented method of claim 1, wherein the first subset of the plurality of product recommendations includes a product identical to the detected product in the image.
Both have subsets of product recommendations that are identical matches to the product in the image or similar to it.
12. The computer-implemented method of claim 11, wherein a first portion of the graphical user interface includes the first subset of the plurality of content recommendations, and a second portion of the graphical user interface includes the second subset of the plurality of content recommendations.
1. causing to be displayed, at the display of the client computing device, a graphical user interface (GUI), wherein a first portion of the GUI includes the image, the first subset of the plurality of product recommendations, and one or more selectable items displayed over the image, and wherein a second portion of the GUI includes the second subset of the plurality of product recommendations;
Both display the first and second subsets of product recommendations.
13. The computer-implemented method of claim 1, wherein the graphical user interface caused to be displayed, at the display of the client device, further includes the image data.
1. causing to be displayed, at the display of the client computing device, a graphical user interface (GUI), wherein a first portion of the GUI includes the image, the first subset of the plurality of product recommendations, and one or more selectable items displayed over the image, and wherein a second portion of the GUI includes the second subset of the plurality of product recommendations;
Both display the image in the GUI.
14. The computer-implemented method of claim 1, wherein the image data includes one of an image or a screenshot.
11. The computer-implemented method of claim 1, wherein the image is taken using a camera associated with the client computing device.
12. The computer-implemented method of claim 1, wherein the image is included in a gallery of images on the client computing device, and the application is configured to access the gallery.
Both have image data that is a screenshot or image.
16. The system of claim 15, wherein one of the client device or the at least one processor identifies the one or more of the image of the product or the text associated with the product included within the image data.
1. detecting a product in the detected image;
Both identify that a product is in the image.
18. The system of claim 15, wherein the at least one content recommendation of the plurality of content recommendations included in the graphical user interface and displayed, at the display of the client device, includes a link to a resource, the client device is caused to access the resource through the link in response to receiving a selection associated with the at least one content recommendation via the graphical user interface, and the selection is at least one of the one or more user interactions received as feedback.
1. causing to be displayed, at the display of the client computing device, a graphical user interface (GUI), wherein a first portion of the GUI includes the image, the first subset of the plurality of product recommendations, and one or more selectable items displayed over the image, and wherein a second portion of the GUI includes the second subset of the plurality of product recommendations; receiving feedback associated with at least one of the plurality of product recommendations from interactions of the user with the GUI displayed on the display of the client computing device;
4. The computer-implemented method of claim 1, further comprising determining a link associated with one or more of the plurality of product recommendations, wherein the displayed GUI includes the link.
5. The computer-implemented method of claim 4, further comprising retrieving graphical content associated with the link, wherein the displayed GUI includes the retrieved graphical content.
Both have links in the GUI associated with the displayed product recommendations, and user interaction with the product recommendations (e.g., retrieving content on a product recommendation via the link) is received as feedback.
Claims 1, 15, and 20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of copending Application No. 19/298,413 in view of Sunkada (U.S. Pub. No. 2014/0089145, cited in IDS). See the table below for the double patenting obviousness analysis.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify copending Application No. 19/298,413 to incorporate the teachings of Sunkada because both address the same field of image-based product search systems, and incorporating Sunkada provides a product marketing and/or shopping platform having improved and/or new tools for marketing products or services, which provide consumers with improved convenience and/or options for shopping for products or services, as taught by Sunkada [0017].
This is a provisional nonstatutory double patenting rejection.
Present Application 19/098,169
Co-Pending App. 19/298,413
Analysis
1. A computer-implemented method for providing image-based content recommendations, the method comprising:
receiving one or more of an image of a product or text associated with the product included within image data associated with a client device;
determining, using at least one machine learning model, a plurality of content recommendations associated with the product based on the one or more of the image of the product or the text associated with the product;
causing to be displayed, at a display of the client device, a graphical user interface including the plurality of content recommendations;
receiving, as feedback, one or more user interactions with at least one content recommendation of the plurality of content recommendations included in the graphical user interface displayed on the display of the client device; and
adapting the at least one machine learning model based on the feedback.
1. A method for providing item recommendations, the method comprising:
receiving an image from a computing device associated with a user;
identifying one or more of an item or text associated with the item included in the image;
determining, using at least one machine learning model, a recommended item based on information associated with a plurality of items stored in a database and the one or more of the item or the text, wherein the recommended item is one of the plurality of items stored in the database determined to match the one or more of the item or the text;
providing the recommended item to the computing device for display and selection in association with content,
Not claimed.
Both are methods for providing recommendations.
Both receive an image including an item or text from a device.
Both use machine learning models to make recommendations based on an image of an item or associated text.
Both display recommendations to a user in a GUI.
The copending application does not claim the limitation. However, the limitation would have been obvious in view of prior art reference Sunkada under nonstatutory obviousness-type double patenting. Specifically, see Sunkada, [0034], [0038], and [0074]. Refer to the 35 U.S.C. 103 rejection for more details.
Claims 15 and 20 recite substantially the same subject matter as claim 1 in a different statutory category and are rejected for the same reasons.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 8-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cheung et al. (U.S. Pub. No. 2015/0235297, cited in IDS), hereinafter Cheung, in view of Sunkada (U.S. Pub. No. 2014/0089145, cited in IDS).
Regarding independent claim 1, Cheung teaches a computer-implemented method for providing image-based content recommendations, the method comprising: (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise.")
receiving one or more of an image of a product or text associated with the product included within image data associated with a client device; (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise (i.e. product of interest), retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0042], discloses a screenshot can be checked to determine whether suitable identifying information of a merchandise is included.)
determining, using at least one machine learning model, a plurality of content recommendations associated with the product based on the one or more of the image of the product or the text associated with the product; (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0031], discloses finding a potential match for the candidate merchandise according to the identifying information. Cheung, [0042], discloses a screenshot can be checked to determine whether suitable identifying information of a merchandise is included by taking extracted features of the screenshot as input in a machine learning approach to be compared to screenshot images in a training dataset.)
causing to be displayed, at a display of the client device, a graphical user interface including the plurality of content recommendations; (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system based on the merchandise identified from the screenshot and/or other factors.)
However, Cheung does not explicitly teach receiving, as feedback, one or more user interactions with at least one content recommendation of the plurality of content recommendations included in the graphical user interface displayed on the display of the client device; and
adapting the at least one machine learning model based on the feedback.
On the other hand, Sunkada teaches receiving, as feedback, one or more user interactions with at least one content recommendation of the plurality of content recommendations included in the graphical user interface displayed on the display of the client device; (Sunkada, [0034] and [0038], discloses product subsystem with a data storage facility that maintains product data, product image data, search results data, user profile data, and any other data. The user profile data may include any data associated with profiles and/or preferences of users accessing system and historical data associated with users. Examiner interprets user profile data and historical data associated with users to be feedback. Sunkada, [0074], discloses logging data associated with a transaction (e.g. a purchase of a product) to be stored and used to ascertain consumption patterns (i.e. user behavior data) for consumers, products, types of products, geographic regions, etc. System also logs and maintains historical data representative of product search requests initiated by consumers, search results data generated in response to the requests, and transaction histories of consumers. Examiner interprets logging data associated with a transaction (e.g. a purchase of a product) to be user interactions with a content recommendation.) and
adapting the at least one machine learning model based on the feedback. (Sunkada, [0034] and [0038], discloses product subsystem with a data storage facility that maintains product data, product image data, search results data, user profile data, and any other data. The user profile data may include any data associated with profiles and/or preferences of users accessing system and historical data associated with users. Examiner interprets user profile data and historical data associated with users to be feedback. The product subsystem can utilize the user profiles to selectively provide product marketing (i.e. personalized product recommendation), search, and/or shopping tools. Examiner interprets utilizing the user profiles to selectively provide product marketing (i.e. personalized product recommendation), search, and/or shopping tools to be adapting the at least one machine learning model based on the feedback.)
Sunkada, [0017], discloses a consumer utilizing an access device having a camera to capture an image of a product (e.g., a photograph of a product) on the fly, and the image of the product may be used to search a repository of product image data to identify one or more product images that match the image of the product. The image analysis and image matching of Sunkada can be the screenshot analysis and merchandise matching of Cheung. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the screenshot-based e-commerce system of Cheung to incorporate the teachings of product image matching using user profiles of Sunkada because both address the same field of image-based product search systems, and incorporating Sunkada into Cheung would provide the screenshot-based e-commerce system with personalized product recommendations based on user profile data.
One of ordinary skill in the art would be motivated to do so as to provide a product marketing and/or shopping platform having improved and/or new tools for marketing products or services, which provide consumers with improved convenience and/or options for shopping for products or services, as taught by Sunkada [0017].
Independent claims 15 and 20 recite substantially the same limitations as independent claim 1, and are rejected for substantially the same reasons. Independent claims 15 and 20 further recite a system for providing image-based content recommendations, the system comprising: at least one processor; and at least one storage device storing instructions which, when executed by the at least one processor, cause the at least one processor to perform operations, and a non-transitory computer-readable medium storing computer-executable instructions which, when executed by at least one processor, cause the at least one processor to perform operations for providing image-based content recommendations, which are also taught by Cheung, [0005], which discloses “system comprising a processor, memory and program code which, when executed by the processor, configures the system”.
Regarding claim 2, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 1, wherein receiving the one or more of the image of the product or the text associated with the product included within the image data associated with the client device comprises: (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise (i.e. product of interest), retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0042], discloses a screenshot can be checked to determine whether suitable identifying information of a merchandise is included.)
receiving the image data from the client device; (Cheung, Fig. 1 and [0034]-[0037], discloses receiving a screenshot captured on a computing device, such as camera software or a screen capture, or alternatively from a screenshot repository.) and
identifying the one or more of the image of the product or the text associated with the product included within the image data. (Cheung, Fig. 1 and [0034]-[0037], discloses receiving a screenshot captured on a computing device, such as camera software or a screen capture, or alternatively from a screenshot repository. Cheung, Figs. 1A-D and [0045]-[0058], discloses using image segmentation analysis to locate objects and boundaries in images to extract content of the segments as metadata, such as text, number, logos or images.)
Regarding claim 3, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 1, wherein the client device is configured to identify the one or more of the image of the product or the text associated with the product included within the image data, and at least the one or more of the image of the product or the text associated with the product included within the image data is received from the client device. (Cheung, Fig. 1 and [0034]-[0037], discloses receiving a screenshot captured on a computing device, such as camera software or a screen capture, or alternatively from a screenshot repository. Cheung, Figs. 1A-D and [0045]-[0058], discloses using image segmentation analysis to locate objects and boundaries in images to extract content of the segments as metadata, such as text, number, logos or images. Cheung, [0062]-[0064], discloses metadata extracted from a screenshot are validated with information provided by an identified source and stored in a database with suitable labels or annotation, such as price, logo, URL (uniform resource locator), or name of a merchandise, and can be used to identify one or more merchandises shown in the screenshot. Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system based on the merchandise identified from the screenshot and/or other factors, such as a user's location, shopping history in the past, or a particular vendor's sales or promotion.)
Regarding claim 4, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 1, further comprising: generating a user profile associated with the client device based on a plurality of image data, including the image data, associated with the client device, wherein the determining the plurality of content recommendations associated with the product is further based on the user profile. (Sunkada, [0017], discloses a consumer utilizing an access device having a camera to capture an image of a product (e.g., a photograph of a product) on the fly and the image of the product may be used to search a repository of product image data to identify one or more product images that match the image of the product. Sunkada, [0034], discloses a data storage facility that maintains product data, product image data, search results data, user profile data, and any other data. User profile data may include any data associated with profiles and/or preferences of users accessing system and may be utilized in conjunction with saving and maintaining records of search requests and/or search results as historical data associated with users. Examiner interprets that capturing an image of a product (e.g., a photograph of a product) on the fly and using the image to search a repository of product image data to identify one or more product images to be a search request using received images (e.g. screenshots) from a client computing device. Sunkada, [0038], discloses a product subsystem may maintain user profiles of users of access devices and utilize the user profiles to selectively provide product marketing, search, and/or shopping tools. Sunkada, [0074], discloses logging data associated with a transaction to be stored and used to ascertain consumption patterns for consumers, products, types of products, geographic regions, etc. System also logs and maintains historical data representative of product search requests initiated by consumers, search results data generated in response to the requests, and transaction histories of consumers.)
Claim 17 recites substantially the same limitations as claim 4, and is rejected for substantially the same reasons.
Regarding claim 5, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 1, wherein the at least one content recommendation of the plurality of content recommendations included in the graphical user interface and displayed, at the display of the client device, includes a link to a resource, and the client device is caused to access the resource through the link in response to receiving a selection associated with the at least one content recommendation via the graphical user interface. (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system based on the merchandise identified from the screenshot and/or other factors. Examiner interprets the user's ability to display information of the merchandise, retrieve more information of the merchandise, and/or make a purchase of the merchandise as items being selectable by the user.)
Regarding claim 6, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 5, wherein the selection is at least one of the one or more user interactions received as feedback. (Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system based on the merchandise identified from the screenshot and/or other factors. Examiner interprets the user's ability to display information of the merchandise, retrieve more information of the merchandise, and/or make a purchase of the merchandise as items being selectable by the user. In combination, Sunkada, [0034] and [0038], discloses product subsystem with a data storage facility that maintains product data, product image data, search results data, user profile data, and any other data. The user profile data may include any data associated with profiles and/or preferences of users accessing system and historical data associated with users. Examiner interprets user profile data and historical data associated with users to be feedback. Sunkada, [0074], discloses logging data associated with a transaction (e.g. a purchase of a product) to be stored and used to ascertain consumption patterns (i.e. user behavior data) for consumers, products, types of products, geographic regions, etc. System also logs and maintains historical data representative of product search requests initiated by consumers, search results data generated in response to the requests, and transaction histories of consumers. Examiner interprets logging data associated with a transaction (e.g. a purchase of a product) to be user interactions with a content recommendation.)
Regarding claim 8, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 1, wherein the determining, using the at least one machine learning model, the plurality of content recommendations associated with the product further comprises: (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0031], discloses finding a potential match for the candidate merchandise according to the identifying information. Cheung, [0042], discloses a screenshot can be checked to determine whether suitable identifying information of a merchandise is included by taking extracted features of the screenshot as input in a machine learning approach to be compared to screenshot images in a training dataset.)
determining, using the at least one machine learning model, the plurality of content recommendations based on a plurality of images of a plurality of products and the one or more of the image of the product or the text associated with the product, wherein the plurality of content recommendations are associated with a subset of the plurality of images each identified as including a matching product to the product. (Cheung, [0031], discloses finding a potential match for the candidate merchandise according to the identifying information. Cheung, [0042], discloses a screenshot can be checked to determine whether suitable identifying information of a merchandise is included by taking extracted features of the screenshot as input in a machine learning approach to be compared to screenshot images in a training dataset. In combination, Sunkada, [0017], discloses a consumer utilizing an access device having a camera to capture an image of a product (e.g., a photograph of a product) on the fly and the image of the product may be used to search a repository of product image data to identify one or more product images that match the image of the product. Sunkada, [0030]-[0031], discloses image matching facility configured to analyze image data and determine from the analysis whether an image matches another image in accordance with an image matching heuristic employing any suitable image matching technologies to analyze images and determine whether the images match. Examiner interprets images that are determined to be a match to mean that the identified product is a matching product to the product of interest included in the screenshot.)
Claim 19 recites substantially the same limitations as claim 8, and is rejected for substantially the same reasons.
Regarding claim 9, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 8, wherein the plurality of content recommendations each include an image of the matching product, from the subset of the plurality of images, and a link to a resource associated with the matching product. (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system based on the merchandise identified from the screenshot and/or other factors.)
Regarding claim 10, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 8, wherein the matching product is one of an identical product to the product or a similar product to the product. (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system [e.g. similar products] based on the merchandise identified from the screenshot and/or other factors.)
Regarding claim 11, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 1, wherein a first subset of the plurality of content recommendations include content associated with an identical product to the product and a second subset of the plurality of content recommendations include content associated with one or more products similar to the product. (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise [i.e. first subset of the plurality of content recommendations include content associated with an identical product to the product] and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system based on the merchandise identified from the screenshot [i.e. second subset of the plurality of content recommendations include content associated with one or more products similar to the product] and/or other factors.)
Regarding claim 12, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 11, wherein a first portion of the graphical user interface includes the first subset of the plurality of content recommendations, and a second portion of the graphical user interface includes the second subset of the plurality of content recommendations. (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise [i.e. a first portion of the graphical user interface] and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system based on the merchandise identified from the screenshot [i.e. a second portion of the graphical user interface] and/or other factors.)
Regarding claim 13, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 1, wherein the graphical user interface caused to be displayed, at the display of the client device, further includes the image data. (Sunkada, [0044], discloses the system may retrieve product information from the repository of product data and/or product image data maintained in data storage facility and provide the retrieved information to access device for presentation to consumer.)
Regarding claim 14, Cheung, in view of Sunkada, teaches the computer-implemented method of claim 1, wherein the image data includes one of an image or a screenshot. (Cheung, Fig. 1 and [0034]-[0037], discloses receiving a screenshot captured on a computing device, such as camera software or a screen capture, or alternatively from a screenshot repository.)
Regarding claim 16, Cheung, in view of Sunkada, teaches the system of claim 15, wherein one of the client device or the at least one processor identifies the one or more of the image of the product or the text associated with the product included within the image data. (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise (i.e. product of interest), retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0042], discloses a screenshot can be checked to determine whether suitable identifying information of a merchandise is included.)
Regarding claim 18, Cheung, in view of Sunkada, teaches the system of claim 15, wherein the at least one content recommendation of the plurality of content recommendations included in the graphical user interface and displayed, at the display of the client device, includes a link to a resource, the client device is caused to access the resource through the link in response to receiving a selection associated with the at least one content recommendation via the graphical user interface, (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0065]-[0069], when merchandise is identified from the screenshot, the system can use a user interface that has mechanisms such as a link, to allow a user to display information identifying the merchandise and retrieve more information of the merchandise, such as price, availability, color, size, and specification, of the merchandise and location and review of the vendor, or make a purchase of the merchandise. The interface also displays different merchandise recommended by the system based on the merchandise identified from the screenshot and/or other factors, such as a user's location, shopping history in the past, or a particular vendor's sales or promotion. Examiner interprets the user's ability to display information of the merchandise, retrieve more information of the merchandise, and/or make a purchase of the merchandise as the identified product being selectable by the user.) and the selection is at least one of the one or more user interactions received as feedback. (Sunkada, [0034] and [0038], discloses product subsystem with a data storage facility that maintains product data, product image data, search results data, user profile data, and any other data. The user profile data may include any data associated with profiles and/or preferences of users accessing system and historical data associated with users. Examiner interprets user profile data and historical data associated with users to be feedback. Sunkada, [0074], discloses logging data associated with a transaction (e.g. a purchase of a product) to be stored and used to ascertain consumption patterns (i.e. user behavior data) for consumers, products, types of products, geographic regions, etc. System also logs and maintains historical data representative of product search requests initiated by consumers, search results data generated in response to the requests, and transaction histories of consumers. Examiner interprets logging data associated with a transaction (e.g. a purchase of a product) to be user interactions with a content recommendation.)
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Cheung, in view of Sunkada, and further in view of HUANG et al. (U.S. Pub. No. 2010/0260426, cited in IDS), hereinafter Huang.
Regarding claim 7, Cheung, in view of Sunkada, teaches all the limitations as set forth in the rejection of claim 1 above. Cheung, in view of Sunkada, further teaches the computer-implemented method of claim 1, wherein the determining, using the at least one machine learning model, the plurality of content recommendations associated with the product further comprises: (Cheung, [0004], discloses "receiving a screenshot comprising identifying information of a merchandise, retrieving a candidate merchandise offered for sale by a vendor, finding a potential match for the candidate merchandise according to the identifying information, and displaying the candidate merchandise." Cheung, [0031], discloses finding a potential match for the candidate merchandise according to the identifying information. Cheung, [0042], discloses a screenshot can be checked to determine whether suitable identifying information of a merchandise is included by taking extracted features of the screenshot as input in a machine learning approach to be compared to screenshot images in a training dataset.)
However, Cheung, in view of Sunkada, does not explicitly teach determining at least one product category for the product, wherein the plurality of content recommendations associated with the product are further determined based on the at least one product category.
On the other hand, Huang teaches determining at least one product category for the product, (Huang, [0025], discloses acquiring and processing an image to initiate a visual search by detecting one or more objects based on features of interest in the image, and comparing the objects to trained images to categorize or recognize the objects. Huang, [0039], discloses image detection/recognition includes algorithms for detecting one or more categories of objects in an image and/or recognizing the objects in the image. Huang, [0050], [0058], [0061]-[0062], and [0068]-[0069], discloses an image recognition server to recognize or otherwise identify one or more objects within an image based on image data, metadata, and contextual data associated with the image of the visual search query. The image recognition server can perform a one-to-one matching of the image with image data stored in an image data and coefficient library and detect at least one object of interest in the image to compute a feature vector that uniquely represents the object of interest for comparison to determine categories of detected objects.) wherein the plurality of content recommendations associated with the product are further determined based on the at least one product category. (Huang, [0009] and [0066], discloses recognizing an object in the query image based on the associated metadata to generate information content based on the recognized object and communicate the information content in response to the visual search query. The information content associated with the selected object in the acquired image can include product information (e.g., a product brand and a product type), related products [i.e. product recommendations], links to online retailers for comparison shopping, or to purchase instantly, etc.)
The image recognition of Huang can be the screenshot analysis and merchandise matching of Cheung. Also, the visual search result based on the recognized object of interest in response to the visual search query using the recognized object, the metadata, and/or the contextual data associated with the image of Huang can be the identifying of candidate merchandise according to the identifying information of Cheung. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the screenshot-based e-commerce system of Cheung to incorporate the teachings of image recognition of Huang because both address the same field of image-based product search systems, and incorporating Huang into Cheung would provide the screenshot-based e-commerce system with object/product category recognition.
One of ordinary skill in the art would be motivated to do so as to provide a way to extract and send a relevant portion of an acquired image instead of the entire acquired image, thus enhancing the speed at which the visual search query is communicated, decreasing the communication bandwidth requirement, and focusing the scope of the visual search, thereby improving the accuracy, speed, and efficiency of the image recognition system, as taught by Huang [0011].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDDY CHEUNG whose telephone number is (571)272-9785. The examiner can normally be reached MON-TH 8:00AM-4:00PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at (571)270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Eddy Cheung/Primary Examiner, Art Unit 2165