DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Status of Claims
Claims 1-20 have been cancelled.
Claims 21-40 have been added, and are rejected.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 21-40 are rejected under 35 U.S.C. 101 because the claims are directed to a judicial exception without significantly more.
Step 1:
Claims 21-35 are directed to a method, which is a process. Claims 36-40 are directed to a system, which is an apparatus. Therefore, claims 21-40 are directed to one of the four statutory categories of invention.
Step 2A (Prong 1):
Taking claim 36 as representative, claim 36 sets forth the following limitations of analyzing items in an image to recommend retail items:
receive a set of data from a user, associated with a first item, comprising receiving one or more images of at least a portion of the first item and text, in image format, describing the first item;
determine categorization metadata of the one or more images of at least the portion of the first item by analyzing the one or more images of at least the portion of the first item;
determine textual metadata of the first item by extracting information from the text received with the one or more images;
determine a feature associated with the first item using the textual metadata and the categorization metadata;
match a feature of a second item in a retailer related to the determined feature of the first item to create a recommendation of at least one retail item;
transmit the recommendation of the at least one retail item to the user.
The recited limitations above set forth the process for analyzing items in an image to recommend retail items. These limitations amount to certain methods of organizing human activity, including commercial or legal interactions (e.g., agreements in the form of contracts, advertising, marketing or sales activities or behaviors, etc.). The claims are directed to identifying items in an image in order to search retailer items and provide recommendations (see specification: [0002], which discloses the problem that suggested items do not account for items the user already has and do not provide inspiration, validation, and empowerment to the user), which is a sales and marketing activity.
Such concepts have been identified by the courts as abstract ideas (see: MPEP 2106.04(a)(2)).
Step 2A (Prong 2):
Examiner acknowledges that representative claim 36 recites additional elements, such as:
a server;
a user computing device;
a computer-readable data storage device storing program instructions;
computer intelligence;
a retailer database.
Taken individually and as a whole, claim 36 does not integrate the recited judicial exception into a practical application of the exception. The additional elements do no more than generally link the use of a judicial exception to a particular technological environment or field of use.
Furthermore, this is also because the claim fails to (i) reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field, (ii) implement a judicial exception with a particular machine, (iii) effect a transformation or reduction of a particular article to a different state or thing, or (iv) apply the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
While the claims recite a user computing device, a server, a computer-readable data storage device, and a database, these elements are recited at a very high level of generalization, merely reciting these elements as executing the steps of the claims and representing a user within a computing environment. Specification paragraph [0025] discloses that the computing device can comprise any general purpose computing article of manufacture, such as any personal computer, server, etc. As such, it is evident that the computing components are any generic computing devices that are merely leveraged to implement the abstract idea within a computing environment. The computer intelligence is also recited in passing in the claim, merely applying the computer intelligence to automate the determining of data within an image. The specification recites the computer intelligence only once, merely disclosing that the metadata may be determined by analyzing the images using computer intelligence. As such, it is clear that the computer intelligence is any generic artificial intelligence that is merely applied to the abstract idea.
In view of the above, under Step 2A (Prong 2), claim 36 does not integrate the recited exception into a practical application (see: MPEP 2106.04(d)).
Step 2B:
Returning to claim 36, taken individually or as a whole, the additional elements of claim 36 do not provide an inventive concept (i.e. whether the additional elements amount to significantly more than the exception itself). As noted above, the additional elements recited in claim 36 are recited in a generic manner with a high level of generality and only serve to implement the abstract idea on a generic computing device. The claims result only in an improved abstract idea itself and do not reflect improvements to the functioning of a computer or another technology or technical field. As discussed above with respect to the integration of the abstract idea into a practical application, the additional elements used to perform the claimed process ultimately amount to no more than the mere instructions to apply the exception using a generic computer and/or no more than a general link to a technological environment.
Even when considered as an ordered combination, the additional elements of claim 36 do not add anything further than when they are considered individually.
In view of the above, claim 36 does not provide an inventive concept under step 2B, and is ineligible for patenting.
Dependent claims 22-35 and 37-40 add further complexity to the judicial exception (abstract idea) of claim 36, such as by further defining the algorithm of analyzing items in an image to recommend retail items. Thus, each of claims 22-35 and 37-40 is held to recite a judicial exception under Step 2A (Prong 1) for at least reasons similar to those discussed above.
Under prong 2 of step 2A, the additional elements of dependent claims 22-35 and 37-40 also do not integrate the abstract idea into a practical application, considered both individually and as a whole. More specifically, dependent claims 22-35 and 37-40 rely on at least similar elements as recited in claim 36. Further additional elements are also acknowledged (e.g., machine learning training (claim 23); computer vision (claim 24)); however, the additional elements of claims 22-35 and 37-40 are recited only at a high level of generality (i.e., as generic computing hardware) such that they amount to nothing more than mere instructions to implement or apply the abstract idea on generic computing hardware (or merely use a computer as a tool to perform the abstract idea). Further, the additional elements do no more than generally link the use of a judicial exception to a particular technological environment or field of use (such as the Internet or computing networks).
Secondly, this is also because the claims fail to (i) reflect an improvement in the functioning of a computer, or an improvement to other technology or technical field, (ii) implement the judicial exception with, or use the judicial exception in conjunction with, a particular machine or manufacture that is integral to the claim, (iii) effect a transformation or reduction of a particular article to a different state or thing, or (iv) apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Taken individually and as a whole, dependent claims 22-35 and 37-40 do not integrate the recited judicial exception into a practical application of the exception under step 2A (prong 2).
Lastly, under Step 2B, claims 22-35 and 37-40 also fail to result in “significantly more” than the abstract idea. The dependent claims recite additional functions that describe the abstract idea and use the computing device to implement the abstract idea, while failing to provide an improvement to the functioning of a computer, another technology, or technical field. The dependent claims fail to confer eligibility under Step 2B because the claims merely apply the exception on generic computing hardware and generally link the exception to a technological environment.
Even when viewed as an ordered combination (as a whole), the additional elements of the dependent claims do not add anything further than when they are considered individually.
Taken individually or as an ordered combination, the dependent claims simply convey the abstract idea itself applied on a generic computer and are held to be ineligible under Step 2B for at least rationale similar to that discussed above regarding claim 36. Thus, dependent claims 22-35 and 37-40 do not add “significantly more” to the abstract idea.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 21, 24-25, 34-36, and 39-40 are rejected under 35 U.S.C. 103 as being unpatentable over Gokturk (US 7,542,610 B2) in view of Dalal (US 20150170250 A1).
Regarding Claim 21: Gokturk discloses a method comprising:
determining, by the server, a feature associated with the first item using the textual metadata and the categorization metadata; (Gokturk: col. 11, ln. 40-50 – “Such features may be referred to as "visual features". Each of these features may be stored numerically as vectors and each item may be indexed. For such entries, the index 120 may include in part or whole a similarity database where the item's metadata is saved along with the visual features. In combination, one by one, or collectively, the various feature vectors for an object may comprise the signature value. The item's metadata is also saved as a metadata feature vector. In one embodiment, the metadata feature can be a mapping of the words to unique identifiers that are derived from a dictionary look-up”; Gokturk: col. 10, ln. 65–col. 11, ln. 6 – “pre-defined categories are identified, and based on information such as keywords describing the image, URL locating the image, or other information, a categorization of the object in the image is made. For instance, a website might have named the shoes as "men's footwear". A corresponding pre-defined category may be labeled "men's shoes". In one embodiment, a rule-based system can be used to map descriptive terms of an image to a predefined category”). In summary, various category and text information is used to determine the features of the object in the image.
matching, by the server, a feature of a second item in a retailer database related to the determined feature of the first item to create a recommendation of at least one retail item; (Gokturk: col. 12, ln. 64–col. 13, ln. 4 – “In step 540, a result comprising one or more images with objects deemed to be similar in appearance is returned to the user. In one embodiment, the result includes, at least initially, only a single image that contains an image deemed most similar to the selected object. In another embodiment, a series, sequence or other plurality of images may be displayed. The images may be sorted or ranked by various factors, including proximity of similarity or other factors”; Gokturk: col. 13, ln. 6-19 – “a user may view an auction or e-commerce page that shows an object for sale. The user may select a feature, or alternatively access a site, that accepts the image as input and processes the image to determine a signature value. One or more implementations also provide that text associated with the merchandise (e.g. auction heading) may also be used to specify a category that the signature value is to apply to. Then a search of the index 120 is performed to identify either exact matches (e.g. the same item on sale at another auction or site) or an item that is deemed similar to the selected merchandise. For example, the user may like the item being viewed, but may want to see what else that is similar in appearance is offered at a particular auction site or on other e-commerce sites”).
transmitting the recommendation of the at least one retail item to the user computing device. (Gokturk: col. 12, ln. 64-66 – “In step 540, a result comprising one or more images with objects deemed to be similar in appearance is returned to the user”).
Gokturk does not explicitly teach a method comprising:
receiving a set of data from a user computing device, by a server, associated with a first item, comprising receiving one or more images of at least a portion of the first item and text, in image format, describing the first item;
determining, by the server, categorization metadata of the one or more images of at least the portion of the first item by analyzing the one or more images of at least the portion of the first item using computer intelligence;
determining, by the server, textual metadata of the first item by extracting information from the text received with the one or more images;
Notably, however, Gokturk does disclose identifying a signature value of the item in the image (Gokturk: col. 12, ln. 38-42) and the metadata being a part of the signature value (Gokturk: col. 11, ln. 46-50). Gokturk also discloses determining category information and analyzing text (Gokturk: col. 11, ln. 2-10).
To that accord, Dalal does teach a method comprising:
receiving a set of data from a user computing device, by a server, associated with a first item, comprising receiving one or more images of at least a portion of the first item and text, in image format, describing the first item; (Dalal: [0044] – “the user may view images of clothing and apparel that include text and image content about the product”).
determining, by the server, categorization metadata of the one or more images of at least the portion of the first item by analyzing the one or more images of at least the portion of the first item using computer intelligence; (Dalal: [0044] – “the user product categorizations may be based on product characteristics (232), including visual product characteristics. In this way, the user's interest for clothing and apparel having, for example, a particular color, texture, pattern, shape or style, brand or price range may be identified”; Dalal: [0057] – “The web service may include intelligence for identifying what other types of clothing are required to form an ensemble or outfit from the item the user has selected”).
determining, by the server, textual metadata of the first item by extracting information from the text received with the one or more images; (Dalal: [0044] – “the user may view images of clothing and apparel that include text and image content about the product. The text and image content may be analyzed using image recognition or processing, as well as text analysis”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Gokturk disclosing the system for analyzing an image to display matching products to purchase with the receiving an image with the item and text and determining categorization and textual metadata as taught by Dalal. One of ordinary skill in the art would have been motivated to do so in order to determine recommendation parameters for the user based on items that are of interest to the user (Dalal: [0008]).
Regarding Claim 24: Gokturk in view of Dalal discloses the limitations of claim 21 above.
Gokturk further discloses wherein the computer intelligence analyzing the one or more images comprises computer vision. (Gokturk: col. 6, ln. 1-15 – “Numerous techniques exist to determine objects in images, as well as to detect characteristics of determined objects, and obtaining signature values of objects in images. Some of these techniques are described in, for example, U.S. patent application Ser. No. 11/246,742, entitled SYSTEM AND METHOD FOR ENABLING THE USE OF CAPTURED IMAGES THROUGH RECOGNITION, filed on Oct. 7, 2005; which is hereby incorporated by reference in its entirety. Any of the priority documents may be used in their teachings for determining objects (including persons, apparel etc.) and obtaining signature values for such objects”).
Regarding Claim 25: Gokturk in view of Dalal discloses the limitations of claim 21 above.
Gokturk further discloses wherein determining categorization metadata of the one or more images of at least a portion of the first item by analyzing the one or more images of at least the portion of the first item using computer intelligence comprises shape recognition of the one or more images of at least the portion of the first item. (Gokturk: col. 11, ln. 38-41 – “features that obtain the color, shape, boundary, and pattern of the foreground object are calculated”).
Regarding Claim 34: Gokturk in view of Dalal discloses the limitations of claim 21 above.
Gokturk further discloses wherein the retail data comprises metadata extracted from images from one or more retailers. (Gokturk: col. 12, ln. 43-52 – “a similarity operation may be performed by the search component 130 on the index 120. The similarity operation may specify the merchandise object and the signature value, or alternatively the various feature vectors and other identifying information stored in the index 120. In addition, the similarity operation may identify objects that are in images recorded in the index, with signature values that are deemed to be similar to the selected object”; Gokturk: col. 11, ln. 45-48 – “the index 120 may include in part or whole a similarity database where the item's metadata is saved along with the visual features. In combination, one by one, or collectively, the various feature vectors for an object may comprise the signature value. The item's metadata is also saved as a metadata feature vector”).
Regarding Claim 35: Gokturk in view of Dalal discloses the limitations of claim 21 above.
Gokturk does not explicitly teach wherein the categorization metadata of the first item comprises type. Notably, however, Gokturk does disclose determining the category of the item in the image (Gokturk: col. 11, ln. 2-10).
To that accord, Dalal does teach wherein the categorization metadata of the first item comprises type. (Dalal: [0051] – “image recognition or analysis may be used in combination with text and made a data analysis in order to determine characteristics of clothing or apparel displayed in individual catalog records (or other content items). The characteristics that can be determined from such analysis include, for example, a clothing/apparel type”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Gokturk disclosing the system for analyzing an image to display matching products to purchase with the categorization metadata including the type as taught by Dalal. One of ordinary skill in the art would have been motivated to do so in order to determine characteristics from which user categorizations may take place (Dalal: [0051]).
Regarding Claim 36: Claim 36 recites substantially similar limitations as claim 21. Therefore, claim 36 is rejected under the same rationale as claim 21 above.
Regarding Claim 39: Claim 39 recites substantially similar limitations as claim 24. Therefore, claim 39 is rejected under the same rationale as claim 24 above.
Regarding Claim 40: Claim 40 recites substantially similar limitations as claim 25. Therefore, claim 40 is rejected under the same rationale as claim 25 above.
Claims 22 and 37 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Gokturk (US 7,542,610 B2) and Dalal (US 20150170250 A1), in view of Jing (US 20070174872 A1).
Regarding Claim 22: The combination of Gokturk and Dalal discloses the limitations of claim 21 above.
The combination does not explicitly teach wherein matching the feature of the second item in the retailer data related to the determined feature of the first item comprises matching at least a portion of the categorization metadata of the first item with at least a portion of the categorization metadata of the second item stored in the retailer data. Notably, however, Gokturk does disclose matching or similar items from merchants (Gokturk: col. 13, ln. 6-19).
To that accord, Jing does teach wherein matching the feature of the second item in the retailer data related to the determined feature of the first item comprises matching at least a portion of the categorization metadata of the first item with at least a portion of the categorization metadata of the second item stored in the retailer data. (Jing: [0020] – “The content system may use this metadata when searching for images that match a query. For example, the content system may search the title, category, and description metadata when determining whether an image matches a query”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Gokturk and Dalal disclosing the system for analyzing an image to display matching products to purchase with the matching of categorization metadata of the first item and the second item stored in retailer data as taught by Jing. One of ordinary skill in the art would have been motivated to do so in order to calculate relevance based on similarity of the query (Jing: [0001]).
Regarding Claim 37: Claim 37 recites substantially similar limitations as claim 22. Therefore, claim 37 is rejected under the same rationale as claim 22 above.
Claims 23 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Gokturk (US 7,542,610 B2) and Dalal (US 20150170250 A1), in view of Zomet (US 20140152847 A1).
Regarding Claim 23: The combination of Gokturk and Dalal discloses the limitations of claim 21 above.
The combination does not explicitly teach machine learning training by using reference images having known features from one or more retailers and one or more users. Notably, however, Gokturk does disclose training a learning algorithm, such as a category mapping algorithm (Gokturk: col. 11, ln. 11-17).
To that accord, Zomet does teach machine learning training by using reference images having known features from one or more retailers and one or more users. (Zomet: [0023] – “Training data may be provided within the image-product database 180 by applying the algorithms to images of known objects”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Gokturk and Dalal disclosing the system for analyzing an image to display matching products to purchase with the training using reference images with known features as taught by Zomet. One of ordinary skill in the art would have been motivated to do so in order to compare against algorithms for detecting features (Zomet: [0023]).
Regarding Claim 38: Claim 38 recites substantially similar limitations as claim 23. Therefore, claim 38 is rejected under the same rationale as claim 23 above.
Claims 26 and 28-29 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Gokturk (US 7,542,610 B2) and Dalal (US 20150170250 A1), in view of Ghanem (US 20110142335 A1).
Regarding Claim 26: The combination of Gokturk and Dalal discloses the limitations of claim 21 above.
The combination does not explicitly teach wherein analyzing the one or more images further comprises normalizing the image. Notably, however, Gokturk does disclose segmenting the image to separate the object from the background/foreground (Gokturk: col. 11, ln. 19-26).
To that accord, Ghanem does teach wherein analyzing the one or more images further comprises normalizing the image. (Ghanem: [0079] – “the image processing unit 125 determines a normalization length of the electronic image. The process of determining a normalization length of an image is illustrated in the flow chart of FIG. 14. In this embodiment, the image processing unit 125 normalizes the dimensions of the sampled pattern area”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Gokturk and Dalal disclosing the system for analyzing an image to display matching products to purchase with the normalizing of the image as taught by Ghanem. One of ordinary skill in the art would have been motivated to do so in order to compare spatial features in the same position (Ghanem: [0056]).
Regarding Claim 28: The combination of Gokturk and Dalal, in view of Ghanem, discloses the limitations of claim 26 above.
The combination does not explicitly teach wherein normalizing the one or more images comprises scaling the image to a standard size. Notably, however, Gokturk does disclose segmenting the image to separate the object from the background/foreground (Gokturk: col. 11, ln. 19-26).
To that accord, Ghanem does teach wherein normalizing the one or more images comprises scaling the image to a standard size. (Ghanem: [0079] – “The process of determining a normalization length of an image is illustrated in the flow chart of FIG. 14. In this embodiment, the image processing unit 125 normalizes the dimensions of the sampled pattern area so that the image captured in the object pattern is set to the same scale as the images of the apparel patterns stored in the processor memory unit 130”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Gokturk and Dalal disclosing the system for analyzing an image to display matching products to purchase with the normalizing of the image by scaling to a standard size as taught by Ghanem. One of ordinary skill in the art would have been motivated to do so in order to compare spatial features in the same position (Ghanem: [0056]).
Regarding Claim 29: The combination of Gokturk and Dalal, in view of Ghanem, discloses the limitations of claim 26 above.
Gokturk further discloses wherein normalizing the one or more images comprises removing background from the image. (Gokturk: col. 11, ln. 21-32 – “The objective of the segmentation process is to separate the object(s) of interest from the background. For this, any foreground/background segmentation algorithm can be used. In one embodiment, the background can be assumed to be at the sides of the images, whereas the foreground can be assumed to be at the center. The intensity distribution of both foreground and background can be obtained from the center and side pixels respectively. As an example, a mixture of Gaussian models can be learnt for the foreground and background pixels. As a last step, these models can be applied to the whole image and each pixel can be classified as foreground and background”).
Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Gokturk (US 7,542,610 B2), Dalal (US 20150170250 A1), and Ghanem (US 20110142335 A1), in view of Dorner (US 9,401,032 B1).
Regarding Claim 27: The combination of Gokturk, Dalal, and Ghanem discloses the limitations of claim 26 above.
The combination does not explicitly teach wherein normalizing the one or more images comprises cropping the image. Notably, however, Gokturk does disclose segmenting the image to separate the object from the background/foreground (Gokturk: col. 11, ln. 19-26).
To that accord, Dorner does disclose wherein normalizing the one or more images comprises cropping the image. (Dorner: col. 7, ln. 19-20 – “one or more areas of interest can be cropped or extracted so that only these areas form the basis for palette generation”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Gokturk, Dalal, and Ghanem disclosing the system for analyzing an image to display matching products to purchase with the cropping of the images as taught by Dorner. One of ordinary skill in the art would have been motivated to do so in order to use only the areas of interest for generating data (Dorner: col. 7, ln. 19-20).
Claims 30-32 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Gokturk (US 7,542,610 B2) and Dalal (US 20150170250 A1), in view of Zhang (US 20160225023 A1).
Regarding Claim 30: The combination of Gokturk and Dalal discloses the limitations of claim 21 above.
The combination does not explicitly teach wherein the text can be obtained by character recognition. Notably, however, Gokturk does disclose mapping descriptive terms of an image (Gokturk: col. 11, ln. 5-10).
To that accord, Zhang does teach wherein the text can be obtained by character recognition. (Zhang: [0043] – “Optical character recognition (OCR) technology is a known approach to convert scanned images into machine-encoded text”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of the combination of Gokturk and Dalal disclosing the system for analyzing an image to display matching products to purchase with the use of character recognition as taught by Zhang. One of ordinary skill in the art would have been motivated to do so in order to obtain text information from non-text sources (Zhang: [0043]).
Regarding Claim 31: The combination of Gokturk and Dalal discloses the limitations of claim 21 above.
The combination does not explicitly teach wherein the text can be recognized by performing a bag-of-words search for predetermined words or phrases. Notably, however, Gokturk does disclose mapping descriptive terms of an image (Gokturk: col. 11, ln. 5-10).
To that accord, Zhang does teach wherein the text can be recognized by performing a bag-of-words search for predetermined words or phrases. (Zhang: [0044] – “From these text information for a certain video advertisement, keywords and their frequencies are retrieved to form a “bag of words” about the video clips. Here nouns are preferable keywords since nouns are believed to contain the most information of a video clip, as mentioned above”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of the combination of Gokturk and Dalal for analyzing an image to display matching products for purchase to incorporate a bag-of-words search, as taught by Zhang. One of ordinary skill in the art would have been motivated to do so in order to retrieve the keywords that contain the most information about the content (Zhang: [0044]).
Regarding Claim 32: The combination of Gokturk and Dalal discloses the limitations of claim 21 above.
The combination does not explicitly teach wherein the text can be recognized by using natural language processing. Notably, however, Gokturk does disclose mapping descriptive terms of an image (Gokturk: col. 11, ll. 5-10).
In this regard, Zhang teaches wherein the text can be recognized by using natural language processing (Zhang: [0030] – “each word in a sentence can be tagged with word level tags used in natural language processing.”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of the combination of Gokturk and Dalal for analyzing an image to display matching products for purchase to incorporate natural language processing, as taught by Zhang. One of ordinary skill in the art would have been motivated to do so in order to extract keywords from the text (Zhang: [0029]).
Claim 33 is rejected under 35 U.S.C. 103 as being unpatentable over Gokturk (US 7,542,610 B2) in view of Dalal (US 20150170250 A1), and further in view of Motoyama (US 20140337345 A1).
Regarding Claim 33: The combination of Gokturk and Dalal discloses the limitations of claim 21 above.
The combination does not explicitly teach wherein the computer intelligence determines a confidence value that represents a probability that the determined feature is correct. Notably, however, Gokturk does disclose training a learning algorithm, such as a category mapping algorithm (Gokturk: col. 11, ll. 11-17).
In this regard, Motoyama teaches wherein the computer intelligence determines a confidence value that represents a probability that the determined feature is correct (Motoyama: [0041] – “image characteristics are extracted from the preprocessed image, and a match is sought between the extracted image characteristics and characteristics of known categories. The matched categories are associated with the image. The matched categories may be associated with the image according to confidence levels”).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the system of the combination of Gokturk and Dalal for analyzing an image to display matching products for purchase to incorporate a confidence value representing the probability that a determined feature is correct, as taught by Motoyama. One of ordinary skill in the art would have been motivated to do so in order to distinguish correctly identified categories from incorrectly identified ones (Motoyama: [0045]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Chen (US 20140279246 A1) discloses a system for processing input of a first product to identify a second product, such as by processing an image to determine attributes of an item in order to identify similar products.
PTO-892 Reference U discloses an image search engine that integrates both textual and visual features to weight the visual features and provide results that reflect the users’ perception.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY J KANG whose telephone number is (571)272-8069. The examiner can normally be reached Monday - Friday: 7:30 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Maria-Teresa Thein, can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/T.J.K./Examiner, Art Unit 3689
/VICTORIA E. FRUNZI/Primary Examiner, Art Unit 3689 1/29/2026