DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 6/25/2025 have been fully considered.
35 USC § 101: These issues have been resolved and the rejection has been withdrawn in light of the amendments and arguments.
35 USC § 103:
Regarding Applicant's Argument (page 10), the Examiner responds: It is important to note that this rejection is one of obviousness and not one of anticipation; hence, elements from one reference can be combined with the foundation of another, separate reference, and obviousness conclusions can be reached in mapping the teachings of the prior art inventions onto the instant application's claim limitations. One cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). The new portions cited in the independent claim are taught by the primary prior art Sal (see the art rejection below for further details), and the new claim limitations in claim 43 are taught by Babenko (see the art rejection below for further details); hence, the new limitations are rendered obvious over Sal in view of Babenko. The examiner recommends further elaborating on "an object-recognition model" in the independent claims. The examiner believes amendments directed toward the parameters/factors involved in and utilized by the object-recognition model, and the corresponding "a recommended manipulation," would help overcome the current prior art and push the application toward allowance. If the applicant would like further guidance on overcoming the prior art(s), please call the examiner at 571-272-5212.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 21-25, 27-29, 31-34, and 37-43 are rejected under 35 U.S.C. 103 as being unpatentable over US 20200142978 A1; Salokhe; Abhimanyu et al. (hereinafter Sal) in view of US 20180005040 A1; Babenko; Boris et al. (hereinafter Babenko).
Regarding claim 21, Sal teaches A method for processing information about an object requiring recognition, comprising: receiving object image data of an object, the received object image data being extracted from captured image data from a scene; (Sal [0011] a query image is received that includes a representation of at least one item of interest. The query image can be analyzed using a localizer to determine regions that correspond to potential items, such as may be based upon unique features in the image that correspond to various item patterns or representative vectors... [0024] one query image, or other set of image data, to use to locate relevant content. In response to receiving such an image, the query image can be passed to a style manager 216, or other such system or service, that may be part of the environment or offered as a third party service, among other such options... [0045] one or more cameras that are able to capture images of the surrounding environment and that are able to image a user, people, or objects in the vicinity of the device. The image capture element can include any appropriate technology, such as a CCD image capture element having a sufficient resolution, focal range, and viewable area to capture an image of the user when the user is operating the device. Methods for capturing images using... [FIG. 4] shows overall flow chart of system at a high level) determining at least one attribute associated with the object image data; (Sal [0011] a machine learning model or other statistical prediction algorithm trained on data for that category can be used to process the image data for that region. The trained model can accept the image data for the region as input, and output a set of attributes and values ... 
[0017] the attributes and values are determined, a data store including attribute and value information for various items can be analyzed to locate items having similar attributes and values [0027] The individual attributes and values can be determined at least in part using neural networks 222 such as a conventional neural network (CNN) or generative adversarial network (GAN) to determine the appropriate attributes and values through training on an appropriate data set [0025] elaborates on the matter [FIG. 4] shows overall flow chart of system at a high level) using an object-recognition model, including one or more convolutional neural networks, that is trained based on collected images of captured objects (Sal [0011] These regions of image data can be analyzed using a classifier to attempt to determine a classification, type, or category of item represented in that region. Once a category has been determined, a machine learning model or other statistical prediction algorithm trained on data for that category can be used to process the image data for that region. The trained model can accept the image data for the region as input, and output a set of attributes and values. [0024] Various object recognition algorithms and processes can be used, which can generate bounding boxes, coordinate sets, or other values or mechanisms that can be used to identify regions of the image that may correspond to objects of specific types, as may include feature points that match various patterns or relationships. The coordinate or bounding box information in this example can be provided, with at least the relevant image data (or just the image data for the relevant portions) to a classifier 228 or other such system or service, that is able to analyze the image data for a specific region and classify the type of item represented in the image region, at least for known classifications of items. 
In various embodiments the classifier and localizer might be part of the same component, process, or service. The classifier can analyze the image data and recognize a class or type of item in each region, at least where such a classification can be determined with at least a minimum level of confidence or certainty. In some embodiments there may be multiple classifications in a given region corresponding to different types of items [0025] The classification data can be provided, with the region data, to an image analyzer 226. The image analyzer may include the localizer and/or classifier in various embodiments. The image analyzer can analyze the regions identified by the localizer and for which classifications were determined by the classifier. The image analyzer 226 can use the classification for a given region to determine a relevant algorithm or model to use to process the image data for that region. This can include, for example, using a trained neural network or other statistical model that has been trained using image data for items of that classification, and is able to identify specific attributes associated with that classification. For example, a neural network trained using labeled test data for dress images can identify attributes such as length, color, pattern, and shape, among others, that are associated with different dresses. [0028] In the example shown in FIG. 2, a neural network 222 can be trained using, for example, images of objects. For CNN-based approaches there can be images submitted that are classified by attribute type, while for GAN-based approaches a series of images may be submitted for training that may include metadata or other information useful in classifying one or more aspects of each image. For example, a CNN may be trained to perform object recognition using images of different types of objects, then learn how the attributes relate to those objects using the provided training data...) 
wherein the at least one attribute comprises at least one of a make, a model, a trim, or an engine type of the object (Sal [0011] These regions of image data can be analyzed using a classifier to attempt to determine a classification, type, or category of item represented in that region. Once a category has been determined, a machine learning model or other statistical prediction algorithm trained on data for that category can be used to process the image data for that region. The trained model can accept the image data for the region as input, and output a set of attributes and values. The attributes can be visual or stylistic attributes that were determined to be exhibited by the representation of the item in the image region. The values can be confidence or certainty values for those elements, or can represent a prominence of the attributes in the image, among other such options. The set of attributes can be used to generate an attribute or feature vector in some embodiments, while the set of attributes and values can be used in others. These values or vectors can be compared against a set of similar types [0016] a classifier to determine a type of item represented in each image. For each class of item, a trained machine learning model, or other such process or algorithm, can be used to analyze the corresponding image region to identify visual attributes of the item. These can include attributes relating to length, color, pattern, cut, width, shape, hemline, neckline, silhouette, occasion type, and the like. The model...[21] type of item where there may be many options available, and a user can specify a specific style or set of attributes that are of interest. This may be particularly relevant for items such as clothing or furniture, where there may be many variations of a type of item available with varying sets of attributes, as well as weights or amounts of that attribute. 
For example...[23-28 and 32-35] elaborate on the at least one attribute associated with the image data comprises at least one of a make, a model, a trim, or an engine type of the object) accessing a plurality of data records, comprising reference object image data of stored objects; selecting a data record from among the plurality of data records; (Sal [FIG.2] accessing a plurality of data records, the data records comprising reference object image data of stored objects and selecting a data record from among the plurality of data records [0015] use these features to locate matching images. [0018] , a graph database is used to build a knowledge graph consisting of item identifiers, attributes (including both item and visually detected attributes, for example), and their relationships to each other. Such data provides for the dynamic determination of items having a given combination of attributes with similar confidence values or other such metrics. [0017] The number of items for which content is returned can vary, such as by the number of items having at least a minimum similarity score or satisfying a similarity score threshold, among other such options...[0026] Information such as the classification, set of attributes, and associated values can then be compared against data in a style repository, for example, that includes style data previously determined or various items. The style manager can then use a similarity determination algorithm or process to compare the data for the query image regions against data for items stored in the style repository. The process can produce a set of results indicating information for similar items, which were determined to have at least a minimum similarity score or value with respect to the query image. In some embodiments a ranking will be produced according to similarity score, and at least a top subset of the item data selected...[FIG. 
4] shows overall flow chart of system at a high level) calculating, without communicating with a remote computer system, a match score based on the selected data record and corresponding to the at least one attribute determined using the object recognition model; (Sal [0011]a machine learning model or other statistical prediction algorithm... A similarity determination algorithm can be used to identify similar items to the item represented in a specific region of the query image, and in some embodiments the items can be ranked by similarity scores. [0016] a trained machine learning model, or other such process or algorithm, can be used to analyze the corresponding image region to identify visual attributes of the item. These can include attributes relating to length, color, pattern, cut, width, shape, hemline, neckline, silhouette, occasion type, and the like. The model can also produce confidence values or attribute scores [0017] generates similarity scores for various items based on the attribute and confidence values, and then ranks the items by similarity scores. Content for at least a subset of the similar items can then be provided for presentation to the user, such as up to a determined number of highest ranked items. In this example, content 154 for three items is provided, where each of those items has a very similar style to the item in the query image based at least on the determined attributes and values.[0024] Various object recognition algorithms and processes can be used, which can generate bounding boxes, coordinate sets, or other values or mechanisms that can be used to identify regions of the image that may correspond to objects of specific types, as may include feature points that match various patterns or relationships. 
The coordinate or bounding box information in this example can be provided, with at least the relevant image data (or just the image data for the relevant portions) to a classifier 228 or other such system or service, that is able to analyze the image data for a specific region and classify the type of item represented in the image region, at least for known classifications of items. In various embodiments the classifier and localizer might be part of the same component, process, or service [0026] Information such as the classification, set of attributes, and associated values can then be compared against data in a style repository, for example, that includes style data previously determined or various items. The style manager can then use a similarity determination algorithm or process to compare the data for the query image regions against data for items stored in the style repository. The process can produce a set of results indicating information for similar items, which were determined to have at least a minimum similarity score or value with respect to the query image. In some embodiments a ranking will be produced according to similarity score, and at least a top subset of the item data selected...[0027] The individual attributes and values can be determined at least in part using neural networks 222 such as a conventional neural network (CNN) or generative adversarial network (GAN) [FIG. 4] shows overall flow chart of system at a high level) and causing the selected data record to be presented based on whether the calculated match score is at or above a threshold value. (Sal [0017] generates similarity scores for various items based on the attribute and confidence values, and then ranks the items by similarity scores. Content for at least a subset of the similar items can then be provided for presentation to the user, such as up to a determined number of highest ranked items.
In this example, content 154 for three items is provided, where each of those items has a very similar style to the item in the query image based at least on the determined attributes and values. [0026] The process can produce a set of results indicating information for similar items, which were determined to have at least a minimum similarity score or value with respect to the query image. In some embodiments a ranking will be produced according to similarity score, and at least a top subset of the item data selected. The data for the items determined to be similar, such as may include a set of item identifiers, can be provided to the content server 210, which can then pull the relevant content from the content data store 212 to return to the client device 202. This may include, for example, image and description data for items determined to be similar according to the style attributes and associated values. [24 & 35] elaborate on the matter [FIG. 4] shows overall flow chart of system at a high level) Sal does not explicitly teach methods and corresponding processes based on records; determining, based on the calculated match score, whether the captured image data needs to be recaptured; or providing a recommended manipulation based on the at least one attribute. However, Babenko teaches methods and corresponding processes based on records (Babenko [0016] Certain embodiments of the present invention relate to systems and methods for classifying and scoring images stored in an online content management service.
Although embodiments of the present invention are generally described with reference to images including but not limited to digital photographs, 3D images; virtual and/or augmented reality digital photographs and/or scenes; computer generated images; and other image files (records), embodiments of the present invention can be similarly applied to other content items (including but not limited to text documents; email messages; text messages; other types of messages; media files such as photos, videos, and audio files; and/or folders containing multiple files) stored in an online content management service. [0026] image database 204 can include images receives from many different users, using different client devices. In some embodiments, image database 204 can represent an interface to a distributed database system that includes multiple storage nodes (e.g., computer systems including processors, memory, and disk space configured to provide storage services to online content management service 200). User content items (including images) can be stored on dedicated storage nodes within image database 204 or can be stored together with other users' content items on the same storage nodes. [0028] image files (records) received from client 202 and/or stored in image database 204.) determining, based on the calculated match score, whether the captured image data needs to be recaptured (Babenko [0043] Action engine 324 can send instructions or recommended actions to a client device based on a composite score calculated for an image. Recommended actions can include actions that will improve the score of a given image. For example, when images are scored in real-time, if the subject of the image is off-center, blurry, or if the subject's eyes are closed, the recommended action can include retaking the image. Similarly, if the activity analysis module 318 scores the image highly, the recommended action can be to share the image. 
[0052] based on the composite score, a recommendation message can be sent to the client device. For example, if the composite score is below a threshold, the recommendation message can be to try capturing the image again) and providing a recommended manipulation based on the at least one attribute; (Babenko [0029] Based on the score, an action engine 212 can generate an action recommendation message to display on client 202 with the score. For example, if the score is low because the image is of poor quality the action recommendation message can include a recommendation to the user to retake the digital photograph [0040] Based on the scores from each module in image analyzer 304, image score calculator 320 can calculate a composite score, recommend actions to the user based on the composite score, and update scoring modules based on feedback received from the user related to the composite score. [0043] Action engine 324 can send instructions or recommended actions to a client device based on a composite score calculated for an image. Recommended actions can include actions that will improve the score of a given image. For example, when images are scored in real-time, if the subject of the image is off-center, blurry, or if the subject's eyes are closed, the recommended action can include retaking the image. Similarly, if the activity analysis module 318 scores the image highly, the recommended action can be to share the image. [51-52] further elaborate) Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the prior methods of Sal with the teachings of Babenko in order to further refine the output and ultimately create a more accurate output (Babenko [0043] Action engine 324 can send instructions or recommended actions to a client device based on a composite score calculated for an image.
Recommended actions can include actions that will improve the score of a given image. For example, when images are scored in real-time, if the subject of the image is off-center, blurry, or if the subject's eyes are closed, the recommended action can include retaking the image. Similarly, if the activity analysis module 318 scores the image highly, the recommended action can be to share the image. [0052] based on the composite score, a recommendation message can be sent to the client device. For example, if the composite score is below a threshold, the recommendation message can be to try capturing the image again [0053] image scores can be used to improve user experience in areas other than displaying selected images.)
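For illustration only, the combined flow mapped above for claim 21 — determining attributes, scoring a selected data record against them, presenting the record when the score meets a threshold, and otherwise recommending recapture per Babenko [0043] and [0052] — could be sketched as follows. This sketch is not part of either reference; all function names, field names, and the example attribute keys are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DataRecord:
    record_id: str
    attributes: dict  # e.g. {"make": ..., "model": ...}; hypothetical attribute keys

def match_score(detected: dict, record: DataRecord) -> float:
    """Fraction of detected attributes that agree with the stored record."""
    if not detected:
        return 0.0
    hits = sum(1 for k, v in detected.items() if record.attributes.get(k) == v)
    return hits / len(detected)

def process(detected: dict, records: list, threshold: float = 0.8) -> dict:
    """Select the best-matching record; recommend recapture below threshold."""
    best = max(records, key=lambda r: match_score(detected, r))
    score = match_score(detected, best)
    if score >= threshold:
        # match score at or above threshold: present the selected record
        return {"action": "present", "record": best.record_id, "score": score}
    # below-threshold score triggers a recapture recommendation (cf. Babenko [0052])
    return {"action": "recapture", "score": score}
```

The threshold value and the per-attribute agreement rule here are placeholders; neither reference discloses a specific scoring formula.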
Corresponding product claim 37 is rejected similarly as claim 21 above. Additional Limitations: computer-readable medium capable of reading and executing instructions (Sal [FIG. 7] shows the corresponding software executing components)
Corresponding system claim 40 is rejected similarly as claim 21 above. Additional Limitations: Device with processor(s) and memory (Sal [FIG. 7] shows the corresponding hardware components)
Regarding claim 22, Sal and Babenko teach The method of claim 21, wherein the data record comprises multiple images of the object. (Babenko [0016] Certain embodiments of the present invention relate to systems and methods for classifying and scoring images stored in an online content management service. Although embodiments of the present invention are generally described with reference to images including but not limited to digital photographs, 3D images; virtual and/or augmented reality digital photographs and/or scenes; computer generated images; and other image files (records), embodiments of the present invention can be similarly applied to other content items (including but not limited to text documents; email messages; text messages; other types of messages; media files such as photos, videos, and audio files; and/or folders containing multiple files) stored in an online content management service. [0026] image database 204 can include images receives from many different users, using different client devices. In some embodiments, image database 204 can represent an interface to a distributed database system that includes multiple storage nodes (e.g., computer systems including processors, memory, and disk space configured to provide storage services to online content management service 200). User content items (including images) can be stored on dedicated storage nodes within image database 204 or can be stored together with other users' content items on the same storage nodes. [0028] image files (records) received from client 202 and/or stored in image database 204.)
Corresponding product claim 38 is rejected similarly as claim 22 above.
Regarding claim 23, Sal and Babenko teach The method of claim 21, wherein the selected data record is a most probable data record corresponding to the object image data. (Sal [0017] generates similarity scores for various items based on the attribute and confidence values, and then ranks the items by similarity scores. Content for at least a subset of the similar items can then be provided for presentation to the user, such as up to a determined number of highest ranked items. In this example, content 154 for three items is provided, where each of those items has a very similar style to the item in the query image based at least on the determined attributes and values. [0026] The process can produce a set of results indicating information for similar items, which were determined to have at least a minimum similarity score or value with respect to the query image. In some embodiments a ranking will be produced according to similarity score, and at least a top subset of the item data selected. The data for the items determined to be similar, such as may include a set of item identifiers, can be provided to the content server 210, which can then pull the relevant content from the content data store 212 to return to the client device 202. This may include, for example, image and description data for items determined to be similar according to the style attributes and associated values. [24 & 35] elaborate on the matter [FIG. 4] shows overall flow chart of system at a high level)
Corresponding product claim 39 is rejected similarly as claim 23 above.
Regarding claim 24, Sal and Babenko teach The method of claim 23, wherein the calculated match score comprises a captured object that is assigned a match score indicating a successful recognition of at least one attribute of the object. (Sal [0017] generates similarity scores for various items based on the attribute and confidence values, and then ranks the items by similarity scores. Content for at least a subset of the similar items can then be provided for presentation to the user, such as up to a determined number of highest ranked items. In this example, content 154 for three items is provided, where each of those items has a very similar style to the item in the query image based at least on the determined attributes and values. [0026] The process can produce a set of results indicating information for similar items, which were determined to have at least a minimum similarity score or value with respect to the query image. In some embodiments a ranking will be produced according to similarity score, and at least a top subset of the item data selected. The data for the items determined to be similar, such as may include a set of item identifiers, can be provided to the content server 210, which can then pull the relevant content from the content data store 212 to return to the client device 202. This may include, for example, image and description data for items determined to be similar according to the style attributes and associated values. [24 & 35] elaborate on the matter [FIG. 4] shows overall flow chart of system at a high level)
Regarding claim 25, Sal and Babenko teach The method of claim 21, wherein the calculated match score includes a probability value or a function related to a probability value. (Sal [0017] A similarity function can be used in some embodiments that generates similarity scores for various items based on the attribute and confidence values, and then ranks the items by similarity scores. Content for at least a subset of the similar items can then be provided for presentation to the user, such as up to a determined number of highest ranked items. In this example, content 154 for three items is provided, where each of those items has a very similar style to the item in the query image based at least on the determined attributes and values. This can be very useful for a user who views an item of interest, and wants to obtain information about that item or items having a similar style or visual appearance. The number of items for which content is returned can vary, such as by the number of items having at least a minimum similarity score or satisfying a similarity score threshold, among other such options.[0026] The process can produce a set of results indicating information for similar items, which were determined to have at least a minimum similarity score or value with respect to the query image. In some embodiments a ranking will be produced according to similarity score, and at least a top subset of the item data selected. The data for the items determined to be similar, such as may include a set of item identifiers, can be provided to the content server 210, which can then pull the relevant content from the content data store 212 to return to the client device 202. This may include, for example, image and description data for items determined to be similar according to the style attributes and associated values. [24 & 35] elaborate on the matter [FIG. 4] shows overall flow chart of system at a high level)
Regarding claim 27, Sal and Babenko teach The method of claim 21, further comprising recapturing the object. (Babenko [0043] Action engine 324 can send instructions or recommended actions to a client device based on a composite score calculated for an image. Recommended actions can include actions that will improve the score of a given image. For example, when images are scored in real-time, if the subject of the image is off-center, blurry, or if the subject's eyes are closed, the recommended action can include retaking the image. Similarly, if the activity analysis module 318 scores the image highly, the recommended action can be to share the image. [0052] based on the composite score, a recommendation message can be sent to the client device. For example, if the composite score is below a threshold, the recommendation message can be to try capturing the image again)
Regarding claim 28, Sal and Babenko teach The method of claim 21, wherein determining whether the object needs to be recaptured comprises determining whether the match score is below a threshold. (Babenko [0043] Action engine 324 can send instructions or recommended actions to a client device based on a composite score calculated for an image. Recommended actions can include actions that will improve the score of a given image. For example, when images are scored in real-time, if the subject of the image is off-center, blurry, or if the subject's eyes are closed, the recommended action can include retaking the image. Similarly, if the activity analysis module 318 scores the image highly, the recommended action can be to share the image. [0052] based on the composite score, a recommendation message can be sent to the client device. For example, if the composite score is below a threshold, the recommendation message can be to try capturing the image again)
Regarding claim 29, Sal and Babenko teach The method of claim 28, further comprising, when the match score is below the threshold, indicating changes to an image capturing process to improve quality of the received object image data to increase the calculated match score. (Babenko [0043] Action engine 324 can send instructions or recommended actions to a client device based on a composite score calculated for an image. Recommended actions can include actions that will improve the score of a given image. For example, when images are scored in real-time, if the subject of the image is off-center, blurry, or if the subject's eyes are closed, the recommended action can include retaking the image. Similarly, if the activity analysis module 318 scores the image highly, the recommended action can be to share the image. [0052] based on the composite score, a recommendation message can be sent to the client device. For example, if the composite score is below a threshold, the recommendation message can be to try capturing the image again)
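For illustration only (this sketch is not part of Babenko's disclosure), the action-engine behavior Babenko [0043] and [0052] describe, recommending that the image be retaken when the composite score is below a threshold and shared when it is scored highly, can be outlined as follows. The function name, the specific threshold values, and the returned action labels are hypothetical assumptions:

```python
def recommend_action(composite_score, retake_threshold=0.4, share_threshold=0.8):
    # Below the lower threshold: recommend recapturing the image.
    # At or above the upper threshold: recommend sharing the image.
    # Otherwise: no recommended action.
    if composite_score < retake_threshold:
        return "retake"
    if composite_score >= share_threshold:
        return "share"
    return "none"
```

The two thresholds separate the "try capturing the image again" recommendation from the "share the image" recommendation described in the quoted paragraphs.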
Regarding claim 31, Sal and Babenko teach The method of claim 21, wherein the match score comprises a set of match score numbers for an object, the match score numbers corresponding to respective attributes of the object. (Sal [0011] The values can be confidence or certainty values for those elements, or can represent a prominence of the attributes in the image, among other such options. The set of attributes can be used to generate an attribute or feature vector in some embodiments, while the set of attributes and values can be used in others. These values or vectors can be compared against a set of similar types of values or vectors that were generated for other items, which are indicative of attributes and values for those items. A similarity determination algorithm can be used to identify similar items to the item represented in a specific region of the query image, and in some embodiments the items can be ranked by similarity scores. Content for at least a subset of the most similar items can then be returned as results for the query image, which can enable additional information to be obtained about those items or enable those items to be obtained by a customer, etc. [0016] The model can also produce confidence values or attribute scores for each region, such as where an item might be 50% blue and 50% white, or where the item might be determined to have a length of halfway between the knee and the ankle with a confidence of 85%, among other such options. Once the set of attributes and corresponding confidence values is determined, the attributes and values can be used to locate items having similar attributes and values, as may have been determined manually or through use of the same trained model, among other such options. When a set of items having similar attributes and values is determined [17-25] elaborate on the matter )
Regarding claim 32, Sal and Babenko teach The method of claim 21, wherein the calculated match score corresponds to a single number for the match score calculated based on a weighted average of match score numbers. (Sal [0031] In some embodiments, a user can have the option of specifying images that include attributes or values that are of interest to the user. For example, the user might have the option of selecting a “thumb's up” 348 or other graphical element associated with an image in order to indicate that the user likes that item. Attributes and values associated with that item can then be weighted more highly than other attributes and/or values. If a set of images are selected, the aggregate attributes and values can be analyzed to determine the attributes and values that are of most interest to the user, which can be used to adjust the rankings or scores for the various items accordingly. In some embodiments there may also be thumb's down or other rating icons or elements that can enable various attributes to be adjusted or weighted accordingly.[0035] For certain types of items, there may be prioritizations of attributes overall or for a specific user, etc. Thus, when determining similar items, the items can be ranked not only according to the attributes and values, but also taking into account the prioritizations and weightings, such that similarity of color might be weighted more heavily than length or shape for certain types of items, among many other such options. The number of attributes or values considered can also be reduced if not enough results can be obtained, or the thresholds or tolerances can be reduced, etc. Other metrics can be used to adjust the rankings as well, as may include customer rating, sales velocity, price, popularity, and the like [19-21] elaborate on the matter)
Regarding claim 33, Sal and Babenko teach The method of claim 32, wherein different weights are associated with different match score numbers. (Sal [0031] In some embodiments, a user can have the option of specifying images that include attributes or values that are of interest to the user. For example, the user might have the option of selecting a “thumb's up” 348 or other graphical element associated with an image in order to indicate that the user likes that item. Attributes and values associated with that item can then be weighted more highly than other attributes and/or values. If a set of images are selected, the aggregate attributes and values can be analyzed to determine the attributes and values that are of most interest to the user, which can be used to adjust the rankings or scores for the various items accordingly. In some embodiments there may also be thumb's down or other rating icons or elements that can enable various attributes to be adjusted or weighted accordingly.[0035] For certain types of items, there may be prioritizations of attributes overall or for a specific user, etc. Thus, when determining similar items, the items can be ranked not only according to the attributes and values, but also taking into account the prioritizations and weightings, such that similarity of color might be weighted more heavily than length or shape for certain types of items, among many other such options. The number of attributes or values considered can also be reduced if not enough results can be obtained, or the thresholds or tolerances can be reduced, etc. Other metrics can be used to adjust the rankings as well, as may include customer rating, sales velocity, price, popularity, and the like [19-21] elaborate on the matter)
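For illustration only (this sketch is not part of Sal's disclosure), the weighting scheme Sal [0031] and [0035] describe, where per-attribute match score numbers carry different weights (e.g., color weighted more heavily than length or shape) and collapse into a single overall score, can be outlined as a weighted average. The attribute names, scores, and weights below are hypothetical assumptions:

```python
def weighted_match_score(attribute_scores, weights):
    # Collapse per-attribute match score numbers into a single number
    # via a weighted average; different attributes carry different weights.
    total_weight = sum(weights[name] for name in attribute_scores)
    if total_weight == 0:
        return 0.0
    return sum(score * weights[name]
               for name, score in attribute_scores.items()) / total_weight

# Hypothetical per-attribute scores; color is weighted above length and shape.
scores = {"color": 0.9, "length": 0.5, "shape": 0.7}
weights = {"color": 3.0, "length": 1.0, "shape": 1.0}
overall = weighted_match_score(scores, weights)
```

Here the overall value is (0.9·3 + 0.5·1 + 0.7·1) / 5 = 0.78, illustrating how a single calculated match score can follow from differently weighted match score numbers.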
Regarding claim 34, Sal and Babenko teach The method of claim 21, wherein the calculated match score is based on first match score numbers corresponding to key attributes of the object. (Sal [0011] The values can be confidence or certainty values for those elements, or can represent a prominence of the attributes in the image, among other such options. The set of attributes can be used to generate an attribute or feature vector in some embodiments, while the set of attributes and values can be used in others. These values or vectors can be compared against a set of similar types of values or vectors that were generated for other items, which are indicative of attributes and values for those items. A similarity determination algorithm can be used to identify similar items to the item represented in a specific region of the query image, and in some embodiments the items can be ranked by similarity scores. Content for at least a subset of the most similar items can then be returned as results for the query image, which can enable additional information to be obtained about those items or enable those items to be obtained by a customer, etc. [0016] The model can also produce confidence values or attribute scores for each region, such as where an item might be 50% blue and 50% white, or where the item might be determined to have a length of halfway between the knee and the ankle with a confidence of 85%, among other such options. Once the set of attributes and corresponding confidence values is determined, the attributes and values can be used to locate items having similar attributes and values, as may have been determined manually or through use of the same trained model, among other such options. When a set of items having similar attributes and values is determined [17-25] elaborate on the matter )
Regarding claim 41, Sal and Babenko teach The method of claim 33, wherein the different weights are further associated with different types of attributes of the object. (Sal [0015] The trained model can accept the image data for the region as input, and output a set of attributes and values. The attributes can be visual or stylistic attributes that were determined to be exhibited by the representation of the item in the image region. The values can be confidence or certainty values for those elements, or can represent a prominence of the attributes in the image, among other such options. The set of attributes can be used to generate an attribute or feature vector in some embodiments, while the set of attributes and values can be used in others. These values or vectors can be compared against a set of similar types of values or vectors that were generated for other items, which are indicative of attributes and values for those items. [0024]specific types, as may include feature points that match various patterns or relationships. The coordinate or bounding box information in this example can be provided, with at least the relevant image data (or just the image data for the relevant portions) to a classifier 228 or other such system or service, that is able to analyze the image data for a specific region and classify the type of item represented in the image region, at least for known classifications of items. In various embodiments the classifier and localizer might be part of the same component, process, or service. The classifier can analyze the image data and recognize a class or type of item in each region, at least where such a classification can be determined with at least a minimum level of confidence or certainty. [0035] For certain types of items, there may be prioritizations of attributes overall or for a specific user, etc. 
Thus, when determining similar items, the items can be ranked not only according to the attributes and values, but also taking into account the prioritizations and weightings, such that similarity of color might be weighted more heavily than length or shape for certain types of items, among many other such options. The number of attributes or values considered can also be reduced if not enough results can be obtained, or the thresholds or tolerances can be reduced, etc. Other metrics can be used to adjust the rankings as well, as may include customer rating, sales velocity, price, popularity, and the like.[0037] Relationships between entities (e.g. Item A, Color Blue) can be modeled via edges (e.g. Item A has property Color Blue). Weightedness is the ability to assign a specific numerical weight to each edge (e.g. Item A has a 60%/0.6 weight of property Color Blue). This lets data be modeled with greater granularity, which allows weighted queries to be performed to obtain more precise results...)
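For illustration only (this sketch is not part of Sal's disclosure), the weighted-edge model Sal [0037] describes, where an entity-property relationship such as "Item A has property Color Blue" carries a numerical weight (e.g., 0.6), can be outlined as follows. The data layout, function name, and minimum-weight value are hypothetical assumptions:

```python
# Weighted property graph: each (entity, property) edge carries a weight,
# e.g. "item_a has a 0.6 weight of property color:blue".
edges = {
    ("item_a", "color:blue"): 0.6,
    ("item_a", "color:white"): 0.4,
    ("item_b", "color:blue"): 0.9,
}

def weighted_query(edges, prop, min_weight):
    # Return items whose edge weight for the given property meets the
    # minimum, enabling the more precise weighted queries described above.
    return sorted(item for (item, p), w in edges.items()
                  if p == prop and w >= min_weight)

blue_items = weighted_query(edges, "color:blue", 0.5)
```

Raising `min_weight` narrows the result set, which is the added granularity weightedness provides over unweighted edges.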
Regarding claim 42, Sal and Babenko teach The method of claim 34, wherein the key attributes of the object comprise at least one of a make or a model of the object. (Sal [0011] These regions of image data can be analyzed using a classifier to attempt to determine a classification, type, or category of item represented in that region. Once a category has been determined, a machine learning model or other statistical prediction algorithm trained on data for that category can be used to process the image data for that region. The trained model can accept the image data for the region as input, and output a set of attributes and values. The attributes can be visual or stylistic attributes that were determined to be exhibited by the representation of the item in the image region. The values can be confidence or certainty values for those elements, or can represent a prominence of the attributes in the image, among other such options. The set of attributes can be used to generate an attribute or feature vector in some embodiments, while the set of attributes and values can be used in others. These values or vectors can be compared against a set of similar types [0016] a classifier to determine a type of item represented in each image. For each class of item, a trained machine learning model, or other such process or algorithm, can be used to analyze the corresponding image region to identify visual attributes of the item. These can include attributes relating to length, color, pattern, cut, width, shape, hemline, neckline, silhouette, occasion type, and the like. The model...[21] type of item where there may be many options available, and a user can specify a specific style or set of attributes that are of interest. This may be particularly relevant for items such as clothing or furniture, where there may be many variations of a type of item available with varying sets of attributes, as well as weights or amounts of that attribute. 
For example...[23-28 and 32-35] elaborate on how the at least one attribute associated with the image data comprises at least one of a make, a model, a trim, or an engine type of the object.)
Regarding claim 43, Sal and Babenko teach The method of claim 21, wherein the recommended manipulation is a way to manipulate a device to increase the calculated match score. (Babenko [0035] Smile detection module 308 can be configured to detect whether the face detected in the image is smiling and assign a corresponding score. Smile detection module 308 can use an edge detection algorithm or other appropriate process to identify the shape of a mouth on the detected face and compare the detected shape to a smile template shape. The closer the detected smile matches the smile template, the higher the score output by smile detector 308. Eye detection module 310 can be configured to identify eyes on the detected face and determine whether the eyes are open or closed. Eye detection module 310 can use similar edge detection or pattern matching algorithms to determine whether a subject's eyes are open, with higher scores being assigned to images in which all detected faces are associated with open eyes. [0043] Action engine 324 can send instructions or recommended actions to a client device based on a composite score calculated for an image. Recommended actions can include actions that will improve the score of a given image. For example, when images are scored in real-time, if the subject of the image is off-center, blurry, or if the subject's eyes are closed, the recommended action can include retaking the image. Similarly, if the activity analysis module 318 scores the image highly, the recommended action can be to share the image.[0044] Scoring update module 326 can receive feedback changes to the scores of images. For example, a user can select an image displayed on a client device and manually enter a new score. Image database 302 can receive the new score and update the score associated with the image. In some embodiments, a user can change the score of an image indirectly, by reordering images that are sorted by score. 
For example, a user can select an image and move it to a different position (e.g., by a drag-and-drop, tap and swipe, or other input). The score of the image can then be increased or decreased based on its change in position. [46-52] elaborate on how the recommended manipulation is a way to manipulate a device to increase the calculated match score.)
Claim 30 is rejected under 35 U.S.C. 103 as being unpatentable over Sal in view of Babenko and US 20130055160 A1; YAMADA; Hidekatsu et al. (hereinafter Yamada).
Regarding claim 30, Sal and Babenko teach The method of claim 21; however, the combination lacks an explicit and ordered teaching of based on the match score being above a threshold, terminating an object recognition session. However, Yamada teaches based on the match score being above a threshold, terminating an object recognition session (Yamada [113] longer than a predetermined threshold value stored in setting data 9Z, the smartphone 1 erases the display of the icon 55 and terminates the application corresponding to the icon 55. [137] exceeds the threshold value, the smartphone 1 erases the display of the four icons 55 and terminates the applications corresponding to the respective icons 55 at Step S63.). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined method of Sal and Babenko with the teachings of Yamada in order to improve the operability of the system via termination methods (Yamada [0105] As described above, the smartphone 1 according to the first embodiment terminates the applications that correspond to the two icons 55 when a gesture in which the two icons 55 displayed on the history list 45 relatively approach each other is detected. Accordingly, the smartphone 1 can terminate the applications being executed in the foreground or background through a simple operation, and thus the operability is improved. [0106] When the above-described gesture is made, the smartphone 1 according to the first embodiment displays the icons 55 arranged on the history list 45 in a manner in which the icons 55 are squashed, and then erases the icons 55 and terminates the applications corresponding to the icons 55. Accordingly, the gesture for terminating the applications corresponds to a change of a display manner of the icons 55. Consequently, it becomes possible for a user to terminate the applications with an intu