DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
2. Claims 1-20 are pending.
Information Disclosure Statement
3. The information disclosure statement (IDS) submitted on 9/9/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
4. The drawings have been reviewed and are accepted as being in compliance with the provisions of 37 CFR 1.121.
Double Patenting
5. The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. See In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and, In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent is shown to be commonly owned with this application. See 37 CFR 1.130(b).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of US 12,298,985. Although the claims at issue are not identical, they are not patentably distinct from each other because claims 1-20 of the instant application substantially recite the limitations of claims 1-20 of the cited US 12,298,985. The instant claims merely omit certain limitations, as shown in the comparison table below, and replace them with broader language reciting "one or more classes of objects."
Claim 1 (instant application):

1. A computing system comprising: one or more processors; one or more non-transitory computer readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising:
obtaining, a data encoding, wherein the data encoding comprises an encoding of a query image, wherein the query image comprises one or more image features;
processing the query image with an image annotator to identify one or more query image labels, wherein the one or more query image labels label one or more objects in the query image and one or more classes of objects in the images;
determining one or more entities associated with the one or more query image labels, wherein the one or more entities comprise specific instances of the one or more classes of objects;
determining a plurality of candidate search queries based at least in part on the one or more entities;
determining a context associated with the query image;
determining a representative search query of the plurality of candidate search queries based at least in part on the context associated the query image;
obtaining a search results page associated with the representative search query; and
providing the search results page for display.

Claim 1 (US 12,298,985):

1. A computing system comprising: one or more processors; one or more non-transitory computer readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising:
obtaining, a data encoding, wherein the data encoding comprises an encoding of a query image, wherein the query image comprises one or more image features;
processing the query image with an image annotator to identify one or more query image labels, wherein the one or more query image labels label one or more objects in the query image and wherein the one or more query image labels comprise coarse grained image labels, wherein the coarse grained image labels label one or more classes of objects;
determining one or more entities associated with the one or more query image labels, wherein the one or more entities comprise specific instances of the one or more classes of objects;
determining a plurality of candidate search queries based at least in part on the one or more entities;
determining a context associated with the query image;
determining a representative search query of the plurality of candidate search queries based at least in part on the context associated the query image;
obtaining a search results page associated with the representative search query; and
providing the search results page for display.
Table 1
Therefore, it would have been obvious to one of ordinary skill in the art of data processing at the time the invention was made to modify the invention as claimed in the instant application by substituting the limitation "and wherein the one or more query image labels comprise coarse grained image labels, wherein the coarse grained image labels label one or more classes of objects," since the omission or addition of the cited limitation would not have changed the process by which the method and system operate as claimed.
Therefore, describing the labeling of the query image as coarse grained would be an obvious variation in the art for the purpose of achieving the same end result of a more specific and detailed image; it would not interfere with the functionality of the steps previously claimed and would perform the same function.
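For illustration only, the operations recited in both claim charts follow the same pipeline. The sketch below is a hypothetical, simplified rendering of that pipeline; every function name, label, and data value is invented for this example and is not drawn from the application, the '985 patent, or the cited references.

```python
# Hypothetical sketch of the claimed operations. All names and values
# here are placeholders invented for illustration only.

def annotate(query_image):
    # Stand-in for the "image annotator": returns labels for objects
    # (and, in the '985 claims, coarse-grained class labels).
    return ["dog", "animal"]

def entities_for(labels):
    # Map class labels to specific entity instances (toy catalog).
    catalog = {"dog": ["Beagle", "Poodle"], "animal": []}
    return [e for label in labels for e in catalog.get(label, [])]

def candidate_queries(entities):
    # Determine candidate search queries from the entities.
    return [f"{e} pictures" for e in entities]

def representative_query(candidates, context):
    # Pick the candidate that best matches the context words.
    return max(candidates, key=lambda q: sum(w in q.lower() for w in context))

def run(query_image, context):
    labels = annotate(query_image)
    entities = entities_for(labels)
    candidates = candidate_queries(entities)
    best = representative_query(candidates, context)
    # Obtaining and providing the results page are stubbed as a string.
    return {"query": best, "results_page": f"results for {best!r}"}
```

As the sketch suggests, the coarse-grained-label wording affects only what `annotate` is said to return, not the downstream steps, which is the substance of the obviousness rationale above.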
The dependent claims 2-11, 13-16, and 18-20 are rejected for fully incorporating the deficiencies of their respective base claims by dependency.
Claim Rejections - 35 USC § 101
6. 35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
7. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claims 1, 12, and 17:
Step 1: The claims are directed to a process/method and system for performing operations on query images and query image features.
Step 2A Prong 1: Claims 1, 12, and 17 recite "determining one or more entities..."; "determining a plurality of candidates..."; "determining a context..."; and "determining a representative search query...". These limitations are processes that, under their broadest reasonable interpretation, cover performance of the limitation in the mind but for the recitation of generic computer components. That is, other than reciting "processors" or "computer system," nothing in the claim elements precludes the steps from practically being performed in the human mind or with the aid of pen and paper.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind, then it falls within the “Mental Processes” grouping of abstract ideas (concepts performed in the human mind including an observation, evaluation, judgment, and opinion).
Step 2A Prong 2: The judicial exception is not integrated into a practical application. The claim recites the additional elements "obtaining...," "processing...," and "providing...." The "obtaining" limitation amounts to data gathering, which is considered insignificant extra-solution activity (MPEP 2106.05(g)); the "processing" and "providing" limitations amount to mere generic transmission and presentation of collected and analyzed data, which is likewise considered insignificant extra-solution activity (MPEP 2106.05(g)).
The computing system, one or more processors and one or more non-transitory computer-readable storage media in these steps are recited at a high-level of generality (i.e., as a generic processor performing a generic computer function) such that it amounts no more than mere instructions to apply the exception using a generic computer component. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. (See MPEP 2106.05(f)). The claim is directed to an abstract idea.
Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The limitations "obtaining...," "processing...," and "providing..." are recognized by the courts as well-understood, routine, and conventional activities when they are claimed in a merely generic manner (see MPEP 2106.05(d)(II)(iv), storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc.; see also Mortgage Grader, Inc. v. First Choice Loan Services Inc., NYLX, Inc. (data comparison)).
Claim 2, dependent of Claim 1, recites generating a respective relevance score for each of the plurality of candidate search queries comprises, for each candidate search query: determining a popularity of the candidate search query; and based on the determined popularity, generating a respective relevance score for the candidate search query; wherein the representative search query is determined based at least in part on the one or more entities, the context, and the relevance scores. The additional limitation elaborates on the abstract idea; "determining a popularity of the candidate search query" is well-understood, routine, and conventional data gathering and comparing for popularity.
Claim 3, dependent of Claim 1, recites “identifying, for one or more of the entities, one or more candidate search queries, wherein the one or more candidate search queries are textual search queries and wherein the one or more candidate search queries are different than one or more terms associated with the one or more entities.” The additional limitation of “identifying,” broadly interpreted as searching or comparing, is considered insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g).
Claim 4, dependent of Claim 1, recites “generating a respective relevance score for each of the candidate search queries; and selecting the representative search query for the query image based at least on the generated respective relevance scores.” The additional limitation elaborates on the abstract idea; generating “relevance scores” is well-understood, routine, and conventional data gathering and comparing for relevance.
Claim 5, dependent of Claim 1, recites “generating a search results page using the candidate search query; analyzing the generated search results page to determine a measure indicative of how interesting and useful the search results page is; and based on the determined measure, generating a respective relevance score for the candidate search query.” The additional limitation of “analyzing” is considered insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g).
Claim 6, dependent of Claim 1, recites “wherein the one or more query image labels tag the one or more image features in the query image.” The additional limitations of data gathering and labeling are considered insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g).
Claim 7, dependent of Claim 1, recites “wherein the one or more image features comprise one or more coarse-grained features.” The additional limitation is simply searching features (coarse graining) and is considered insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g).
Claim 8, dependent of Claim 1, recites “wherein the one or more image features comprise one or more fine-grained features.” The additional limitation is simply searching features (fine graining) and is considered insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g).
Claim 9, dependent of Claim 1, recites “wherein the search results page comprises a plurality of search results responsive to the representative search query.” The additional limitation of “search results” is considered insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g).
Claim 10, dependent of Claim 1, recites “wherein the query image comprises an image found on a website accessed by a user device.” The additional limitation of “an image found on a website” is considered insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g).
Claim 11, dependent of Claim 1, recites “wherein the search results page comprises a knowledge panel, wherein the knowledge panel comprises general information associated with the one or more entities associated with the one or more query image labels.” The additional limitation of the “knowledge panel” is considered insignificant extra-solution activity to the judicial exception; see MPEP 2106.05(g).
Claims 13-16 and 18-20 recite similar limitations and are therefore rejected for the same reasons as explained above.
Claim Rejections - 35 USC § 103
8. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
9. Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rosenberg et al (US 9,053,115) in view of Sharifi (US 2015/0052121), hereinafter “Rosenberg” and “Sharifi” respectively.
The applied reference has a common assignee or Inventor with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2).
This rejection under 35 U.S.C. 103 might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C.102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B); or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement. See generally MPEP § 717.02.
As per Claim 1, Rosenberg discloses:
A computing system comprising: one or more processors; one or more non-transitory computer readable media that collectively store instructions that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising: obtaining, a data encoding, wherein the data encoding comprises an encoding of a query image, wherein the query image comprises one or more image features; (Col. 2, lines 30-52, “generating, in the data processing system, a feature distance being a distance from corresponding query image feature scores; training, in the data processing system, an image similarity model based on the similarity feedback data and the feature distances, the image similarity model being trained to identify one or more candidate images that are visually similar to the query image based on image feature scores of the one or more candidate images; generating a visual similarity score for a candidate image…”)
processing the query image with an image annotator to identify one or more query image labels, (Col. 9, lines 61-67, “Image Label Subsystem” “image labels can be used to identify images that are related to a query”, the image label subsystem, being the “image annotator”)
wherein the one or more query image labels label one or more objects in the query image; (Col. 9, lines 61-67, “image labels can be used to identify images that are related to a query” and Col. 5, lines 50-56, “For example, two images are semantically related if the two images are described as containing the same or related objects” and see Figures 4 and 6)
determining one or more entities associated with the one or more query image labels; (Col. 7, lines 60-65, “Labels can be derived from text that is associated with candidate images and the query image. A label is data that specifies subject matter to which an image is related. For example, an image of a baseball can have the labels "baseball" and/or "sports" associated with it” and see Figures 4 and 6)
determining a plurality of candidate search queries based at least in part on the one or more entities; (Col. 4, lines 22-28, “A web site 104 is one or more resources associated with a domain name and hosted by one or more servers. An example web site is a collection of web pages formatted in hypertext markup language (HTML) that can contain text, images, multimedia content, and programming elements, e.g., scripts. Each web site 104 is maintained by a publisher, e.g., an entity that manages and/or owns the web site.” The owner and manager of the web site providing the search queries. And Col. 5, lines 25-33, “…user sessions are stored in a data store such as the historical data store 114. Selection data specifying actions taken in response to search results provided are also stored in a data store such as the historical data store” the relationship and determination based on user session and historical data being based on the “one or more entities” association as claimed.)
determining a context associated with the query image; (Col. 28, lines 19-30, “Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination…”)
determining a representative search query of the plurality of candidate search queries based at least in part on the context associated the query image; (Col. 8, lines 34-40, “Relevance scores in the context of the candidate images can be computed based wholly or in part on the visual similarity scores, i.e., the term can be either the visual similarity score or a score that is a function of the visual similarity score and one or more other scores.”)
obtaining a search results page associated with the representative search query; and providing the search results page for display. (Col. 1, lines 35-38, “an image presented on that web page to generate an overall search result score for the image.” And Col. 2, lines 1-10, “relevance scores for the candidate images based on the visual similarity scores, each relevance score being a relevance measure of a respective candidate image to the query image; generating a ranking of the candidate images; and selecting a highest ranking subset of the candidate images to be referenced by image search results.”)
Rosenberg discloses the associated concepts but does not specifically disclose the claimed “entity” or “entities.”
Sharifi discloses the above mentioned limitations as follows: (Par [0009], “consumed by the user, or indicating that the entity that is associated with the one or more query terms of the search query is associated with a media item that has been indicated as consumed by the user in the media consumption database; determining that the entity that is associated with the one or more query terms of the search query is identified, in the media consumption database that identifies one or more media items that have been indicated as consumed by the user”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to incorporate the teachings of Sharifi, specifically entities associated with image features or labels, into the method of Rosenberg to take advantage of applying a statistical technique to compare the best-scored, ranked image with its pertaining entities. The modification would have been obvious because one of ordinary skill in the art would implement the best comparison to provide the best personalized result.
As per Claim 2, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein the operations further comprise: generating a respective relevance score for each of the plurality of candidate search queries. (Col. 2, lines 1-10, “…system, relevance scores for the candidate images based on the visual similarity scores, each relevance score being a relevance measure of a respective candidate image to the query.”)
generating a respective relevance score for the candidate search query: (col. 2, lines 20-30, “identifying, in the data processing system, relevance scores for the candidate labels, each relevance score being a measure of relevance of the text of the respective candidate label to query image; and selecting, in the data processing system, second labels for the query image based on the relevance scores.”)
wherein the representative search query is determined based at least in part on the one or more entities, the context, and the relevance scores. (Col. 2, lines 1-10, “relevance scores for the candidate images based on the visual similarity scores, each relevance score being a relevance measure of a respective candidate image to the query image; generating a ranking of the candidate images; and selecting a highest ranking subset of the candidate images to be referenced by image search results.”).
Rosenberg does not specifically disclose the “for each candidate search query: determining a popularity of the candidate search query; and based on the determined popularity”
Sharifi discloses the above mentioned limitations as follows: (Par [0164], “the entity recognition engine 640 can determine an importance measure or popularity measure associated with the potential entities, and can select the most important or most popular of the potential entities as the entity referenced by the user-input query. For example, the entity recognition engine 64… The Rolling Stones" may have a greater measure of importance or popularity than "Rolling Stone Records." Based on the band "The Rolling Stones" having a greater measure of importance or popularity, the entity recognition engine 640 can identify the band "The Rolling Stones" as the entity referenced by the user-input query.” See Figure 6).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to incorporate the teachings of Sharifi, specifically a popularity score related to an image feature or label, into the method of Rosenberg to take advantage of applying a statistical technique to compare the best-scored, ranked image by popularity. The modification would have been obvious because one of ordinary skill in the art would implement the best comparison to provide the best result.
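As a purely illustrative sketch of the popularity-based relevance scoring discussed above (in the spirit of claim 2 and Sharifi's popularity measure), the toy counts and the normalization below are assumptions of this example and are not taken from the record:

```python
# Illustrative only: a minimal popularity-weighted relevance score.
# The candidate names, counts, and formula are invented for this sketch.

def relevance_scores(candidates, popularity):
    # Relevance here is just the normalized popularity of each candidate.
    total = sum(popularity.get(c, 0) for c in candidates) or 1
    return {c: popularity.get(c, 0) / total for c in candidates}

def pick_representative(candidates, popularity):
    # The representative query is the candidate with the highest score,
    # mirroring Sharifi's "most popular of the potential entities" idea.
    scores = relevance_scores(candidates, popularity)
    return max(candidates, key=scores.get)

# Toy data echoing Sharifi's "Rolling Stones" example (values invented).
popularity = {"the rolling stones": 900, "rolling stone records": 40}
best = pick_representative(list(popularity), popularity)
```

Under these assumed counts, the band name would be selected over the record label, which is the selection behavior the quoted Sharifi passage describes.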
As per Claim 3, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein determining the representative search query of the plurality of candidate search queries based at least in part on the context associated the query image comprises: identifying, for one or more of the entities, one or more candidate search queries, (Col. 1, lines 60-67, “identifying, in a data processing system, candidate images matching the query labels; generating, in the data processing system, visual similarity scores for the candidate images, each visual similarity score representing a visual similarity of a respective candidate image to the query image”
wherein the one or more candidate search queries are textual search queries and wherein the one or more candidate search queries are different than one or more terms associated with the one or more entities. (Col. 1, lines 30-37, “search system can determine the relevance of an image to a text query based on the textual content of the resource in which the image is located and also based on relevance feedback associated with the image.” And Col. 6, lines 16-25, “For example, scaled versions of the query image QI and different colored versions of the query image QI can be identified.”).
Rosenberg discloses the associated concepts but does not specifically disclose the claimed “entity” or “entities.”
Sharifi discloses the above mentioned limitations as follows: (Par [0009], “consumed by the user, or indicating that the entity that is associated with the one or more query terms of the search query is associated with a media item that has been indicated as consumed by the user in the media consumption database; determining that the entity that is associated with the one or more query terms of the search query is identified, in the media consumption database that identifies one or more media items that have been indicated as consumed by the user”).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to incorporate the teachings of Sharifi, specifically entities associated with image features or labels, into the method of Rosenberg to take advantage of applying a statistical technique to compare the best-scored, ranked image with its pertaining entities. The modification would have been obvious because one of ordinary skill in the art would implement the best comparison to provide the best personalized result.
As per Claim 4, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein determining the representative search query of the plurality of candidate search queries based at least in part on the context associated the query image comprises: generating a respective relevance score for each of the candidate search queries; (Col. 2, lines 1-10, “relevance scores for the candidate images based on the visual similarity scores, each relevance score being a relevance measure of a respective candidate image to the query image; generating a ranking of the candidate images; and selecting a highest ranking subset of the candidate images to be referenced by image search results.”)
and selecting the representative search query for the query image based at least on the generated respective relevance scores. (Col. 2, lines 5-10, “and selecting a highest ranking subset of the candidate images to be referenced by image search results.”).
As per Claim 5, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein the operations further comprise: generating a respective relevance score for each of the candidate search queries comprises, for each candidate search query: (Col. 2, lines 1-10, “relevance scores for the candidate images based on the visual similarity scores, each relevance score being a relevance measure of a respective candidate image to the query image; generating a ranking of the candidate images; and selecting a highest ranking subset of the candidate images to be referenced by image search results.”)
generating a search results page using the candidate search query: (Col. 17-24, “The user devices 106 receive the search results, e.g., in the form of one or more web pages, and render the pages for presentation to users.”)
analyzing the generated search results page to determine a measure indicative of how interesting and useful the search results page is; (Col. 5, lines 34-45, “initial set of images identified by a text query may be referenced on a user device 106 by search results 111. The user of the user device 106 can then select one or more of the search results 111, e.g., select an image thumbnail, and submit a request to the search system 110 to perform an image search using the image as a query image. The image search subsystem 120 then processes the query image to identify images that are visually and semantically related to the query image.”)
and based on the determined measure, generating a respective relevance score for the candidate search query. (Col. 5, lines 46-50, “The term "visual relatedness" refers to the visual similarity of images as measured by visual features of the images…”).
As per Claim 6, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein the one or more query image labels tag the one or more image features in the query image. (Col. 6, lines 1-5, “The image similarity subsystem 124 generates data representing the visual similarity of images to the query image. The image label subsystem 126 generates data indicative of the topic or subject matter to which images are related.”).
As per Claim 7, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein the one or more image features comprise one or more coarse-grained features. (Col. 8 and 9, lines 57-67 and 1-6, respectively “In some implementations, the image features can include color, texture, edges and other characteristics. Image feature scores can be computed, for example, for images during the crawling that is performed by the search system 110 or the image search system 120. The image feature scores can be computed at two or more image scales so that visual similarities between images at different visual scales can be more accurately determined. Example processes for extracting values of image features from which a feature score can computed include processes for generating color histograms, texture detection processes (e.g., based on spatial variation in pixel intensities), scale-invariant feature transform, edge detection, corner detection and geometric blur.”).
As per Claim 8, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein the one or more image features comprise one or more fine-grained features. (Col. 8 and 9, lines 57-67 and 1-6, respectively “In some implementations, the image features can include color, texture, edges and other characteristics. Image feature scores can be computed, for example, for images during the crawling that is performed by the search system 110 or the image search system 120. The image feature scores can be computed at two or more image scales so that visual similarities between images at different visual scales can be more accurately determined. Example processes for extracting values of image features from which a feature score can computed include processes for generating color histograms, texture detection processes (e.g., based on spatial variation in pixel intensities), scale-invariant feature transform, edge detection, corner detection and geometric blur.”).
As per Claim 9, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein the search results page comprises a plurality of search results responsive to the representative search query. (Col. 6, lines 26-35, “… image search queries, and in response the image search subsystem 120 can receive corresponding sets of image search results 134 responsive to the low confidence labels, i.e., LISR1 . . . LISRk. The image search results 134 include references to results images that were identified in response to the image search queries.”).
As per Claim 10, the rejection of Claim 1 is incorporated and Rosenberg further discloses: wherein the query image comprises an image found on a website accessed by a user device. (Col. 1, lines 30-38, “…an information retrieval score measuring the relevance of a text query to the content of a web page can be combined with a click through rate of an image presented on that web page to generate an overall search result score for the image.”).
As per Claim 11, the rejection of Claim 1 is incorporated and Sharifi further discloses: wherein the search results page comprises a knowledge panel, wherein the knowledge panel comprises general information associated with the one or more entities associated with the one or more query image labels. (Par [0149], “During operation (C), information selected for presentation to the user 512 in a knowledge card is presented to the user 512 in the knowledge card 516. For instance, the knowledge card 516 is presented to the user 512 in the user interface 514, and can include information identifying an upcoming concert that features "The Rolling Stones…” and see Figure 6, including the “knowledge engine” being the “panel” as claimed.).
Therefore, it would have been obvious to a person of ordinary skill in the art at the effective filing date to incorporate the teachings of Sharifi, specifically a panel presenting knowledge related to an image feature or label, into the method of Rosenberg in order to take advantage of applying a statistical technique to compare the best-scored, ranked image. The modification would have been obvious because one of ordinary skill in the art would implement the best comparison to provide the best result.
As per Claims 12-20, these are the method and non-transitory computer-readable medium claims corresponding to Claims 1-11, respectively, and are rejected for the same reasons set forth in connection with the rejections of Claims 1-11. Rosenberg further discloses the corresponding features. (Cols. 8 and 9, lines 57-67 and 1-6, respectively).
Conclusion
10. The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Jing; Yushi, US-7961986-B1, relates to a method that includes determining a score for an image of a plurality of images with respect to each of one or more terms, and identifying one or more of the terms for each of which the score for the image with respect to the respective identified term satisfies a criterion.
Brandt; Jonathan, US-9317534-B2, relates to combined semantic descriptions and visual attribute search.
Wang; Zhaowen, US-20150120760-A1, relates to a system that may query a database of tagged images by submitting a set of vectors as search criteria to a search engine, where the querying of the database may obtain a set of tagged images.
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGELICA RUIZ whose telephone number is (571)270-3158. The examiner can normally be reached M-F 10:00 am to 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANGELICA RUIZ/Primary Examiner, Art Unit 2154 January 14, 2026