Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending in this application.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over De Guerre et al., US 2018/0173725 (hereinafter De Guerre) in view of Yuen et al., US 2017/0103072 (hereinafter Yuen).
For claims 1, 8, 15, De Guerre teaches a search processing method performed by a computer device, the method comprising:
receiving a search operation…in a conversation service interface (see De Guerre, [0028], [0033], “search keywords related to a topic of a message conversation between a first user and one or more second users” that includes “keywords obtained from image/video assets 225” and user initiating a “search query” to a “search engine”).
Yuen teaches "receiving a search operation on multimedia data displayed in a conversation service interface" (see Yuen, [0014], [0037], "image search" along with "image tags" represents search operation on multimedia data). Yuen further teaches "displaying a plurality of search tags extracted from the multimedia data adjacent to the multimedia content" (see Yuen, [0038] – [0041], Figs. 2, 4-5, "depicted in interface 200 is image tag 202A and image tag 202B being provided with the image 202") and "in response to a user selection of one of the plurality of search tags as a target tag, initiating a search request, the search request including a keyword corresponding to the target tag" (see Yuen, [0038] – [0039], "image tags" are "selectable to initiate a new search query"). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of De Guerre with the teachings of Yuen to provide relevant search results for images based on derived image tags (see Yuen, [0012], [0014], [0038]).
The combination further teaches
in response to the search operation:
displaying a search result interface, the search result interface including a search result associated with the target tag (see De Guerre, [0032] – [0033], "Search engine 140 can return search results" and "Search results 240 can include expressive graphical content and metadata and keywords associated with the expressive graphical content" where "keywords" associated with content represents search results associated with target tag, and where displayed "metadata" represents tag associated with keyword, [0045]; see Yuen, [0038] – [0039], "images 302 are then provided that are associated with the new search query 301"); and
in response to a user selection of the search result, displaying the search result and the multimedia data in the conversation service interface, wherein the search result and the multimedia data are to be shared with a second user (see De Guerre, [0027], "presents a plurality of the ranked images/videos to the user for selection and sharing in a message" and "In response to a user selecting an image/graphic from the presented images/video, the selected image/graphic can be automatically shared in a message" represents displaying multimedia data to be shared with second user; see Yuen, [0038] – [0039], displaying image search results that includes multimedia data).
For claims 2, 9, 16, the combination teaches the method according to claim 1, wherein the multimedia data is displayed in the conversation service interface in the form of a social conversation message transmitted from the second user (see De Guerre, [0027], content "shared" within "message application" with second user).
For claims 3, 10, 17, the combination teaches the method according to claim 1, wherein the method further comprises:
before receiving the search operation on the multimedia data:
displaying an option bar on the conversation service interface in response to a trigger event for the multimedia data on the conversation service interface, the option bar comprising a search option (see De Guerre, [0032], displaying “a ‘search’ control in a user interface of message application” for user to trigger before searching for content); and
determining, in response to detecting a selection operation on the search option, that the search operation for the multimedia data is triggered (see De Guerre, [0032] – [0033], and in response to selection, “search query is issued automatically in response to selecting the keyword(s)”).
For claims 4, 11, 18, the combination teaches wherein the method further comprises:
in response to the user selection of the search result, displaying a sharing window, the sharing window comprising the search result and the multimedia data and a sharing control (see De Guerre, [0042], where “Selection of a content item in the suggestion control 307 shares the content item in a text 308” represents sharing control to share multimedia data); and
in response to detecting a user selection of the sharing control, transmitting the search result and the multimedia data to the second user (see De Guerre, [0042], where “selection” results in sharing content item, representing multimedia data).
For claims 5, 12, the combination teaches wherein the method further comprises:
before displaying the search result interface:
displaying a search object window in response to the search operation on the multimedia data in the conversation service interface, the search object window comprising an object identifier of at least one search application object (see De Guerre, [0028], [0032] – [0033], "search keywords can be presented to the user in response to a user selecting a 'search' control from a user interface of message application," which occurs before a "search query is issued" to "return search results," and where "selecting" of keyword objects within "interface of message application 210" represents object identifier of an application object); and
determining, in response to a selection operation performed in the search object window, M target object identifiers in the object identifier of the at least one search application object, M being a positive integer, and triggering search application objects indicated by the M target object identifiers to perform search processing (see De Guerre, [0028], [0033], [0045], “facilitate multi-keywords selection with a single selection” where “user can simultaneously select a plurality of keywords” within “interface of message application 210” in order to initiate a search represents selection of M target object identifiers),
wherein search results of search application objects indicated by N target object identifiers are to be displayed on the search result interface, N being a positive integer less than or equal to M (see De Guerre, [0027], [0032] – [0033], [0045], "search can be automatically performed using the selected keywords" to return "search results" comprising "ranked images/videos" where an exemplary number of ranked search results is less than or equal to exemplary number of selected keywords).
For claims 6, 13, 19, the combination teaches the method according to claim 1, wherein the method further comprises:
in response to a user selection on a target search result in the search result interface, initiating a result application object corresponding to the target search result to display search information of the target search result (see De Guerre, [0030], [0033], "Search results 240 can include…metadata" where "metadata associated with each piece of expressive graphical content [includes] popularity, recency, and/or trending information about the expressive graphical content," and where "metadata" displaying data associated with the search represents a result application object displaying search information),
the search result displayed in the search result interface comprising result prompt information and an application object identifier of a result application object corresponding to the result prompt information (see De Guerre, [0030], [0033], [0070], where “popularity, recency, and/or trending information” represents result prompt information and “sharing scores” and/or “trending scores” represent application object identifier).
For claims 7, 14, 20, the combination teaches the method according to claim 1, wherein the method further comprises:
before displaying the search result interface:
performing feature analysis on the multimedia data to obtain at least one key feature of the multimedia data (see De Guerre, [0033], “search API 220 can be used to rank the expressive graphical content in the search results 240. Expressive graphical content can be ranked using by any/all of the relevance of the graphical content to keywords of the chat transcript, recency of the expressive graphical content, whether the expressive graphical content is trending, and how many times the expressive graphical content has been shared, such as on social media” where ranking of items based on relevancy, recency, trending, and/or sharing represents performed feature analysis to obtain key features);
establishing a search tag corresponding to the at least one key feature (see De Guerre, [0028], [0033], establishing "keywords…obtained from the image/video search results" represents search tag); and
performing search processing according to a target key feature corresponding to the target tag to obtain the search result associated with the target tag (see De Guerre, [0028], [0033], where ranking of search results according to keywords represents search according to the target key feature).
Response to Arguments
Applicant’s arguments with respect to claim(s) rejected under 35 U.S.C. 103 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENSEN HU whose telephone number is (571)270-3803. The examiner can normally be reached Monday - Friday 9-5 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JENSEN HU/Primary Examiner, Art Unit 2169