Prosecution Insights
Last updated: April 19, 2026
Application No. 18/237,585

SYSTEMS AND METHODS FOR PROCESSING EMOJIS IN A SEARCH AND RECOMMENDATION ENVIRONMENT

Final Rejection: §101, §103, §DP
Filed: Aug 24, 2023
Examiner: PEACH, POLINA G
Art Unit: 2165
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Adeia Guides Inc.
OA Round: 4 (Final)

Grant Probability: 50% (Moderate)
OA Rounds: 5-6
To Grant: 3y 7m
With Interview: 73%

Examiner Intelligence

Career Allow Rate: 50% (229 granted / 461 resolved; -5.3% vs TC avg)
Interview Lift: +23.2% (strong; based on resolved cases with interview)
Avg Prosecution: 3y 7m (typical timeline)
Total Applications: 495 (across all art units; 34 currently pending)
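As a quick consistency check on the examiner figures above, the headline numbers hang together; a minimal sketch (the without-interview baseline is back-derived from the displayed lift, since the report does not state it directly):

```python
# Sanity-check the examiner dashboard figures (values taken from the report).
granted, resolved = 229, 461

career_allow_rate = 100 * granted / resolved
print(round(career_allow_rate, 1))  # 49.7, displayed as "50%"

with_interview = 73.0   # allow rate for resolved cases with an interview
interview_lift = 23.2   # displayed "+23.2% interview lift"

# Implied without-interview baseline (not stated in the report):
without_interview = with_interview - interview_lift
print(round(without_interview, 1))  # 49.8, close to the 49.7% career-wide rate
```

The near-agreement between the implied without-interview rate and the career-wide allow rate suggests the lift is measured against the examiner's non-interview cases.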

Statute-Specific Performance

§101: 17.9% allow rate (-22.1% vs TC avg)
§103: 49.9% allow rate (+9.9% vs TC avg)
§102: 14.5% allow rate (-25.5% vs TC avg)
§112: 11.2% allow rate (-28.8% vs TC avg)

Tech Center averages are estimates. Based on career data from 461 resolved cases.
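The per-statute deltas are mutually consistent: back-computing the implied Tech Center average from each row yields the same figure, roughly a 40% allow rate (a derived value, not stated directly in the report):

```python
# Examiner allow rate and delta vs Tech Center average, per statute,
# as listed in the report.
stats = {
    "§101": (17.9, -22.1),
    "§103": (49.9, +9.9),
    "§102": (14.5, -25.5),
    "§112": (11.2, -28.8),
}

# Implied TC average = examiner rate - delta vs TC average.
implied = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied)  # every statute implies the same ~40.0% TC average
```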

Office Action

Rejections: §101, §103, §DP (nonstatutory double patenting)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of the Claims

Claims 51, 57, 61, and 67 have been amended. Claims 51-54, 56-64, and 66-70 are pending.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The USPTO internet Web site contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used.
A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 51-54, 56-64, and 66-70 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-17 of US Patent 11,775,583. Although the conflicting claims are not identical, they are not patentably distinct from each other for the following reasons: US Patent 11,775,583 does not explicitly disclose the limitation “wherein the metadata emoji is different from the query emoji, and wherein the metadata emoji and the query emoji are untranslated.” However, claim 2 of US Patent 11,775,583 discloses “translating the emoji portion into text; determining, for each of the videos, a second textual match score based on the translated emoji portion,” which implies, and makes it obvious to conclude, that the metadata emoji and the query emoji are initially untranslated. Thus, US Patent 11,775,583 contains every element of claims 51-54, 56-64, and 66-70 of the instant application 18/237,585 and thus anticipates, or renders obvious, the claims of the instant application. The claims of the instant application 18/237,585 therefore are not patentably distinct from the earlier patent claims and as such are unpatentable under obviousness-type double patenting. A later patent/application claim is not patentably distinct from an earlier claim if the later claim is anticipated by the earlier claim. A later patent claim is not patentably distinct from an earlier patent claim if the later claim is obvious over, or anticipated by, the earlier claim.
In re Longi, 759 F.2d at 896, 225 USPQ at 651 (affirming a holding of obviousness-type double patenting because the claims at issue were obvious over claims in four prior art patents); In re Berg, 140 F.3d at 1437, 46 USPQ2d at 1233 (Fed. Cir. 1998) (affirming a holding of obviousness-type double patenting where a patent application claim to a genus is anticipated by a patent claim to a species within that genus). See ELI LILLY AND COMPANY v. BARR LABORATORIES, INC., United States Court of Appeals for the Federal Circuit, ON PETITION FOR REHEARING EN BANC (DECIDED: May 30, 2001). The dependent claims are anticipated or rendered obvious by the species of the patented invention. Cf. Titanium Metals Corp. v. Banner, 778 F.2d 775, 227 USPQ 773 (Fed. Cir. 1985) (holding that an earlier species disclosure in the prior art defeats any generic claim). “This court's predecessor has held that, without a terminal disclaimer, the species claims preclude issuance of the generic application. In re Van Ornum, 686 F.2d 937, 944, 214 USPQ 761, 767 (CCPA 1982); Schneller, 397 F.2d at 354. Accordingly, absent a terminal disclaimer, the dependent claims were properly rejected under the doctrine of obviousness-type double patenting.” In re Goodman (CA FC) 29 USPQ2d 2010 (12/3/1993).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 51-54, 56-64, and 66-70 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The USPTO Guidance recites: (1) any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activity such as a fundamental economic practice, or mental processes) (Step 2A, Prong 1); and (2) additional elements that integrate the judicial exception into a practical application (Step 2A, Prong 2). MPEP §§ 2106.04(a), (d). Only if the claim (1) recites a judicial exception and (2) does not integrate that exception into a practical application, do we then look in Step 2B to whether the claim: (3) adds a specific limitation beyond the judicial exception that is not “well-understood, routine, conventional” in the field; or (4) simply appends well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception. MPEP § 2106.05(d).

Step 2A, Prong One: Is a Judicial Exception Recited?

First, determine whether the claims recite any judicial exceptions, including certain groupings of abstract ideas (i.e., mathematical concepts, certain methods of organizing human activity, or mental processes). MPEP § 2106.04(a). Claim 51 recites:

▪ receiving a query comprising a query emoji (generic computer functions of receiving and processing that are well-understood, routine, and conventional activities previously known to the industry. Extracting caption data and natural text processing are merely extra-solution activities and do not meaningfully limit the independent claims. Generic computer implementation does not provide significantly more than the abstract idea. Also an abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III).
Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” — a user can manually search for a VHS tape with an intended emotional content); ▪ accessing metadata for a video, the metadata comprising a set of emojis generated based at least in part on reactions to the video received from a plurality of user devices in communication with a social platform (Abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III). Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion) — a user can make observations about content associated with the video. For example, markings on a video tape (VHS tape) can provide such information); ▪ wherein each emoji of the set of emojis has a corresponding first matching score for the video based in part on instances of the respective emoji in the reactions to the video (Abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III). Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion) — a mathematical evaluation, which can be performed by the user); ▪ selecting, from the set of emojis of the metadata, a metadata emoji based at least in part on determining the corresponding first matching score for the video meets or exceeds a first score threshold (Abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III).
Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion) — a user can apply mathematical scoring on data to select the best-matching emojis); ▪ determining a second matching score between the query emoji and the metadata emoji, wherein the metadata emoji is different from the query emoji (Abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III). Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion) — a user can analyze whether the emotions depicted in the tape metadata differ from the intended search emotions); ▪ wherein the metadata emoji and the query emoji are untranslated (Abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III). Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion)); ▪ determining that the second matching score of the query emoji and the metadata emoji meets or exceeds a second score threshold (An abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III).
Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” — a user can mentally determine a threshold of a match based on logical reasoning); ▪ based at least in part on (i) the first matching score for the metadata emoji corresponding to the video meeting or exceeding the first score threshold and (ii) determining that the second matching score of the query emoji and the metadata emoji meets or exceeds the second score threshold, determining that the query emoji matches the video (An abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III). Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” — a user can mentally determine a threshold of a match based on logical reasoning); ▪ in response to determining that the query emoji matches the video, providing for display a representation of the video as a query result (Abstract idea of a mental process, see MPEP § 2106.04(a)(2)(III). Under the broadest reasonable interpretation, this limitation is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion) — a user can determine the intended video content based on a logical match). These limitations, based on their broadest reasonable interpretation, recite a mental process, i.e., a judicial exception. For these reasons, independent claim 51, as well as independent claim 61, which includes limitations commensurate in scope with claim 51, recites a judicial exception. A method like the claimed method, i.e., “a process that employs mathematical algorithms to manipulate existing information to generate additional information[,] is not patent eligible.” See Digitech Image Techs., LLC v. Elecs. for Imaging, Inc., 758 F.3d 1344, 1351 (Fed. Cir. 2014). See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350 (Fed. Cir.
2016), where collecting information, analyzing it, and displaying results from certain results of the collection and analysis was held to be an abstract idea. See In re Meyer, 688 F.2d 789, 795-96 (CCPA 1982), which held that “a mental process that a neurologist should follow” when testing a patient for nervous system malfunctions was not patentable. Accordingly, the claims recite an abstract idea.

Step 2A, Prong Two: Is the Abstract Idea Integrated into a Practical Application?

Next, determine whether the claims recite additional elements that integrate the judicial exception into a practical application (see MPEP §§ 2106.05(a)-(c), (e)-(h)). To integrate the exception into a practical application, the additional claim elements must, for example, improve the functioning of a computer or any other technology or technical field (see MPEP § 2106.05(a)), apply the judicial exception with a particular machine (see MPEP § 2106.05(b)), or apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment (see MPEP § 2106.05(e)). Additional elements: ▪ communication circuitry configured to receive a query (Amounts to “apply it”. Merely reciting the words "apply it" (or an equivalent) with the judicial exception, or merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, see MPEP § 2106.05(f). Examiner’s note: a high-level application of using communication circuitry amounts to merely invoking a computer component to apply the exception); ▪ providing for display a representation of the video as a query result (Adding insignificant extra-solution activity to the judicial exception - see MPEP § 2106.05(g)). The term “additional elements” refers to claim features, limitations, or steps that the claim recites beyond the identified judicial exception.
Claim 51 does not recite any additional elements beyond the identified mental process. Claim 51 does not refer to any computer elements and is purely a mental process of searching video content, such as a VHS tape for example, based on visual markings. Claim 61 recites the additional element of “communication circuitry”; however, the claims do not recite any improvements to this additional element, nor do they recite any particularly programmed or configured computer system or device. Rather, the additional elements in claims 51 and 61 serve merely to automate the abstract idea. See Int’l Bus. Machs. Corp. v. Zillow Group, Inc., 50 F.4th 1371, 1382 (Fed. Cir. 2022) (“[A] patent that ‘automate[s] “pen and paper methodologies” to conserve human resources and minimize errors’ is a ‘quintessential “do it on a computer” patent’ directed to an abstract idea.”) (quoting Univ. of Fla. Rsch. Found., Inc. v. Gen. Elec. Co., 916 F.3d 1363, 1367 (Fed. Cir. 2019)). Therefore, none of these recited additional elements, whether considered individually or in combination, integrates the judicial exception into a practical application. The additional elements listed above that relate to computing components are recited at a high level of generality (i.e., as generic components performing generic computer functions such as communicating and processing known data) such that they amount to no more than mere instructions to apply the exception using generic computing components. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Additionally, the claims do not purport to improve the functioning of the computer itself. There is no technological problem that the claimed invention solves. Rather, the computer system is invoked merely as a tool. Accordingly, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Therefore, these claims are directed to an abstract idea. For these reasons, independent claims 51 and 61 are directed to an abstract idea.

Step 2B: Does the Claim Provide an Inventive Concept?

The additional elements are not sufficient to amount to significantly more than the judicial exception. Next, determine whether the claims recite an “inventive concept” that “must be significantly more than the abstract idea itself, and cannot simply be an instruction to implement or apply the abstract idea on a computer.” BASCOM Glob. Internet Servs., Inc. v. AT&T Mobility LLC, 827 F.3d 1341, 1349 (Fed. Cir. 2016); see MPEP § 2106.05(d). There must be more than “computer functions [that] are ‘well-understood, routine, conventional activit[ies]’ previously known to the industry.” Alice Corp. v. CLS Bank Int'l, 573 U.S. 208, 225 (2014) (second alteration in original) (quoting Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 73 (2012)); see MPEP § 2106.05(d). There is no “inventive concept” sufficient to transform the abstract idea into a patent-eligible application. See MPEP § 2106.05. Rather, the additional elements identified above are merely well-understood, conventional computer components, as confirmed by the Specification. See MPEP § 2106.05(d)(1). For example, the Specification refers to the additional elements in generic terms. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements relating to computing components amount to no more than applying the exception using generic computing components. Mere instructions to apply an exception using a generic computing component cannot provide an inventive concept. Furthermore, the broadest reasonable interpretation of the claimed computer components (i.e., additional elements) includes any generic computing components that are capable of being programmed to communicate and process known data.
Additionally, the computer components are used for performing insignificant extra-solution activity and well-understood, routine, and conventional functions. For example, the claimed processor and machine learning merely communicate and process known data. Activities such as these are insignificant extra-solution activity and, therefore, well understood, routine, and conventional. See MPEP 2106.05(d); see also, e.g., OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d at 1363, 115 USPQ2d at 1092-93 (presenting offers to potential customers and gathering statistics generated based on the testing about how potential customers responded to the offers; the statistics are then used to calculate an optimized price); CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (obtaining information about transactions using the Internet to verify credit card transactions); Ultramercial, Inc. v. Hulu, LLC, 772 F.3d at 715, 112 USPQ2d at 1754 (consulting and updating an activity log); Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1354-55, 119 USPQ2d 1739, 1742 (Fed. Cir. 2016) (selecting information, based on types of information and availability of information in a power-grid environment, for collection, analysis and display); Apple, Inc. v. Ameranth, Inc., 842 F.3d 1229, 1244, 120 USPQ2d 1844, 1856 (Fed. Cir. 2016) (recording a customer’s order); Return Mail, Inc. v. U.S. Postal Service, -- F.3d --, -- USPQ2d --, slip op. at 32 (Fed. Cir. August 28, 2017) (identifying undeliverable mail items, decoding data on those mail items, and creating output data); Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1331, 115 USPQ2d 1681, 1699 (Fed. Cir. 2015) (arranging a hierarchy of groups, sorting information, eliminating less restrictive pricing information and determining the price). Furthermore, limitations such as integrating account details are well-understood, routine, and conventional activity. See Alice Corp., 134 S.
Ct. at 2359, 110 USPQ2d at 1984 (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log). Independent claims 51 and 61 contain the identified abstract ideas, with the additional elements of a processor, hardware, and media, which are generic computer components, and thus are not significantly more, for the same reasons and rationale above. Dependent claims 52-54, 56-60 and 62-64, 66-69 further describe the abstract idea. The additional elements of the dependent claims fail to integrate the abstract idea into a practical application and do not amount to significantly more than the abstract idea. Thus, as the dependent claims remain directed to a judicial exception, and as the additional elements of the claims do not amount to significantly more, the dependent claims are not patent eligible. As such, the claims are not patent eligible. Claims 52 and 62 - (Abstract idea of a mental process. Under the broadest reasonable interpretation, the determining of a genre, as drafted, is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion) — a user can manually determine a genre for a given content). Claims 53 and 63 - (Abstract idea of a mental process. Under the broadest reasonable interpretation, the claimed determining, as drafted, is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion) — a user can manually determine a marking associated with different scenes). Claims 54 and 64 - (Abstract idea of “a mathematical concept” — see MPEP § 2106.04(a)(2)(I). Note: under the broadest reasonable interpretation of the claim, the claimed invention encompasses a mathematical concept (e.g., a mathematical formula or equation)).
Claims 56, 59 and 66, 69 - (Abstract idea of a mental process. Under the broadest reasonable interpretation, the claimed determining, as drafted, is an abstract idea of “a mental process” because it recites a process that can be performed in the human mind (i.e., observation, determination, evaluation, judgment, and opinion) — a user can determine that [emoji image] corresponds to ";)"). Claims 57, 60 and 67, 70 - (Generic computer functions of receiving and processing that are well-understood, routine, and conventional activities previously known to the industry. Extracting caption data and natural text processing are merely extra-solution activities and do not meaningfully limit the independent claims. Generic computer implementation does not provide significantly more than the abstract idea. These amount to no more than mere instructions to apply the abstract idea using a generic computer component - see MPEP 2106.05(f)). Additional elements: the additional element listed above in Step 2A, Prong 2 is merely instructions to be implemented on a generic computer component. Therefore, the additional element does not amount to an inventive concept, particularly when the activity is well understood or conventional (MPEP 2106.05(d)). Step 2A, Prong 1: The claims recite judicial exceptions enumerated in the 2019 PEG, as discussed above. Step 2A, Prong 2: The judicial exception is not integrated into a practical application. Dependent claims 52-54, 56-60 and 62-64, 66-69 are thus also patent ineligible for the reasons discussed above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 51-53, 56-63, and 66-70 are rejected under 35 U.S.C. 103 as being unpatentable over Tesch et al. (US 20140161356) in view of HAN et al. (US 20190379942), in further view of GU et al. (US 20170052946), and alternatively or additionally in view of CHEN et al. (US 20180255009).

Regarding claim 51, Tesch teaches a method comprising: receiving a query comprising a query emoji ([0059] “a smiley face identified in a text message”, [0067], [0079]-[0080], [0082] “emoticons and/or acronyms can be received … as a query term or the like”, [0102], [0138]); accessing metadata for a video, the metadata comprising a set of emojis generated based at least in part on reactions to the video received from a plurality of user devices in communication with a social platform ([0071] “components described herein can be presented on a social network”, [0074] “emoticons and/or acronyms associated with the set of media content portions”, [0108] “metadata associated with the media content and/or characteristics or features of the videos/images/audio content being analyzed”, [0101]), the metadata comprising a set of emojis based on reactions to the video ([0057] “matching of audio content of the media content with words that are represented by the acronym or the matching of an action, an expression, or audio content with an image or an emotion represented by the emoticon”; [0058] “media
splicing … receives the identified emoticons and/or acronyms from the image analysis”; [0059] “portions of media content or media content portions include segments of video clips and/or images that express the emoticon and/or acronym”, [0083], [0178]), wherein each emoji of the set of emojis has a corresponding classification for the video based in part on instances of the respective emoji in the reactions to the video ([0152] “media content corresponding to the phrases, words, and/or images that meet a set of classification criteria, such as for popular videos”, [0153], [0155]-[0156], [0171] “criteria can include a vague extraction, an estimated extraction or, in other words, an imprecise extraction”); selecting, from the set of emojis of the metadata, a metadata emoji based at least in part on determining the corresponding classification for the video meets or exceeds a first score threshold; determining a second matching score between the query emoji and the metadata emoji ([0068] “generate different associations among an acronym and/or an emoticon with an image of media content”, [0080], [0083], [0090], [0092])(see NOTE I), wherein the metadata emoji is different from the query emoji ([0060] as in for a “smiley face emoticon (:)) and/or LOL acronym … returns media content portions having a smiley face made by a vampire, werewolf, jack-o-lantern, ghost, or any other hallowed like theme with images, videos segments, or sounds having the Halloween theme and that also correspond to the emoticon a smiley face”, “a smiley face or LOL received … return a vampire smiling or laughing out loud from scenes of the movie "Salem's Lot" … many different classifications”, [0061]) (see NOTE) determining that the second matching score of the query emoji and the metadata emoji meets or exceeds a second score threshold ([0073], [0079] “a match of the image … with the identified word/phrase/image of the emoticon and/or acronym can determine what portions are extracted from the 
media content”, [0080], [0126], wherein fuzzy logic allows for degrees of truth between 0 and 1, with the threshold acting as a point on this scale, also see [0131] “matches or matching criteria of the predetermined criteria can be weighted”); based at least in part on (i) the classification for the metadata emoji corresponding to the video (ii) determining that the second matching score of the query emoji and the metadata emoji meets or exceeds the second score threshold ([0073], [0079], [0080], [0126], wherein fuzzy logic allows for degrees of truth between 0 and 1, with the threshold acting as a point on this scale, also see [0131] “matches or matching criteria of the predetermined criteria can be weighted”), determining that the query emoji matches the video ([0073], [0079] “a match of the image … with the identified word/phrase/image of the emoticon and/or acronym can determine what portions are extracted from the media content”, [0080], [0126], wherein fuzzy logic allows for degrees of truth between 0 and 1, with the threshold acting as a point on this scale, also see [0131] “matches or matching criteria of the predetermined criteria can be weighted”, C1 “extract a set of media content portions from media content that correspond to the emoticon or the acronym”); and in response to determining that the query emoji matches the video, providing for display a representation of the video as a query result ([0050] “generate any number of portions of a movie, film or other video, audio content, photos or the like as candidate to place within the multimedia message for the portion of the multimedia message that corresponds to or is expressed by the emoticon received”, [0059]-[0060], [0074]-[0075] “user, for example, could prefer a scene from a movie (e.g., Rocky) to represent an emoticon and/or acronym, rather than a segment of a home video”, [0098], [0126], [0148], [0158] “generate a sequence of media content portions that correspond to words, phrases or images of 
the inputs”). ◊ Tesch does not explicitly teach, however, HAN discloses: wherein each emoji of the set of emojis has a corresponding first matching score ([0033], [0038]) for the video based in part on instances of the respective emoji in the reactions to the video (Table 1, [0038]-[0039], [0041]) and based at least in part on (i) the first matching score for the metadata emoji corresponding to the video meeting or exceeding the first score threshold ([0041], [0055], [0075], [0079], [0087] where “icon occurrence density is greatest are selected” is a threshold). NOTE - HAN further discloses the limitation “wherein the metadata emoji is different from the query emoji” (see [0128], where emojis “V5” and “thumbs up”, the text features, and the emoji category correspond to “compliment”; [0148] “correspondence relationships between emotion labels and various emojis on each theme may be established”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Tesch to include a matching score as disclosed by HAN. Doing so would allow the user to conveniently watch exciting segments in the multimedia resources that can best represent emotional features of the multimedia resources and effectively segment multimedia resources based on a user reaction, facilitating rapid obtaining of a segment of interest by the user and improving the viewing experience of the user (HAN [0057], [0099]). ◊ Tesch does not explicitly teach, however, GU discloses: “wherein the metadata emoji is different from the query emoji, and wherein the metadata emoji and the query emoji are untranslated” ([0075], [0078] wherein the evaluation is based on picture content and not based on text, and thus it is reasonable and obvious to conclude that the emoji is not translated during matching; [0080] “pictures may also be matched”, [0111], [0114], [0188]-[0189]).
NOTE - Gu further discloses the limitation - “wherein the metadata emoji is different from the query emoji” ([0033], [0038], [0128] where “emojis “V5” and “thumbs up”, the text features and the emoji category corresponding to “compliment”, [0148] “correspondence relationships between emotion labels and various emojis on each theme may be established”). NOTE - Tesch teaches – “text based message having words or phrases that are matched with the words or phrases correlated to or identified with the media content portions” [0092], wherein - “Each word or phrase… can be any tag, label or metadata that identifies the media content portion” [0089], wherein such “tag, label or metadata” include emoticons –“the words or phrases connected with each portion from the set of home videos … for labeling with an emoticon and/or acronym”; “a portion of video may be labeled according to the word or phrase … and also with emoticons and/or acronyms” [0074]. Based on the above, matching words or phrases (which can be labels or metadata, such as emoticons and/or acronyms) in a user query with the words or phrases (which can be labels or metadata, such as emoticons and/or acronyms) in the media content portions is obviously analogous to matching emoticons and/or acronyms with emoticons and/or acronyms in the video content. Tesch further teaches that such matching is based on “a rule based logic, fuzzy logic, probabilistic, statistical reasoning, classifiers, neural networks” [0126], which implicitly provides a threshold. 
Therefore, Tesch obviously and implicitly teaches the limitations of - “determining a second matching score between the query emoji and the metadata emoji; determining that the second matching score of the query emoji and the metadata emoji meets or exceeds a second score threshold.” However, merely to obviate such reasoning, Gu teaches - “determining a second matching score between the query emoji and the metadata emoji; determining that the second matching score of the query emoji and the metadata emoji meets or exceeds a second score threshold” ([0080], [0084], [0091], [0155]). Gu also teaches “emojis generated based at least in part on reactions to the video received from a plurality of user devices in communication with a social platform” [0029], [0063]. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Tesch to include a second matching score of the query emoji and the metadata emoji meeting or exceeding a second score threshold as disclosed by GU. Doing so would quickly and accurately meet the current input needs of users and improve input efficiency (GU [0012]). ◊ Once again, Tesch teaches that matching is based on various factors, such as (1) fuzzy matching between emojis in a query and emojis in the media content and (2) popularity [0152]-[0153], which at least indicates a frequency of selection of such media content. As stated above, Tesch does not explicitly teach a first threshold (i.e. matching based on “quantity and/or frequency of an emoji associated with a content item”, as disclosed in the applicant’s specification). 
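For illustration only, the claimed “second matching score” step — comparing the query emoji against a metadata emoji and testing the result against a threshold — can be sketched as follows. The similarity measure (a Jaccard-style codepoint overlap standing in for the references’ fuzzy-logic matching), the function names, and the threshold value are all assumptions of this sketch, not drawn from Tesch, Han, or Gu.

```python
# Illustrative sketch of the claimed "second matching score" step.
# The codepoint-overlap similarity is a stand-in for the references'
# "fuzzy logic" matching; the threshold value is hypothetical.

def second_matching_score(query_emoji: str, metadata_emoji: str) -> float:
    """Return 1.0 for an exact match, else a fuzzy score in [0, 1]."""
    if query_emoji == metadata_emoji:
        return 1.0
    q, m = set(query_emoji), set(metadata_emoji)
    # Jaccard similarity over the emojis' Unicode codepoints.
    return len(q & m) / len(q | m) if (q | m) else 0.0

SECOND_SCORE_THRESHOLD = 0.5  # hypothetical value

def query_matches_metadata(query_emoji: str, metadata_emoji: str) -> bool:
    """The "meets or exceeds" test of the claim language."""
    return second_matching_score(query_emoji, metadata_emoji) >= SECOND_SCORE_THRESHOLD
```

An exact match scores 1.0 and passes the threshold; two unrelated single-codepoint emojis score 0.0 and fail it.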
While Tesch as modified by Han teaches prong (i), Chen additionally or alternatively discloses selecting content based on both prongs – based at least in part on (i) the first matching score for the metadata emoji corresponding to the video meeting or exceeding the first score threshold ([0123]) and (ii) determining that the second matching score of the query emoji and the metadata emoji meets or exceeds the second score threshold ([0053] “emojis that are frequently used”, [0088] “occurrence density for each type of emoji is greatest can be selected”, [0095], [0100], [0105], [0125]). Chen additionally or alternatively discloses - wherein the first matching score is a statistical value ([0033] “with respect to a certain video within a certain period (e.g., a month) can be statistically analyzed”, [0038]) based in part on at least one of a quantity or a frequency of instances of the metadata emoji in the reactions to the video ([0053], [0055]-[0056], [0075]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Tesch to include prongs (i) and (ii) as disclosed by CHEN. Doing so would increase the accuracy of the emoji recommendation (CHEN [0125]) and improve interaction between different users and the click rate of multimedia resources (CHEN [0069]). Claim 61 recites substantially the same limitations as claim 51, and is rejected for substantially the same reasons. Regarding claims 52 and 62, Tesch as modified teaches the method and the system, wherein the video comprises a plurality of scenes having corresponding genres, the method further comprising: determining one or more scenes of the plurality of scenes having a genre associated with the query emoji (Tesch [0050], [0178], [0196], HAN [0048], [0084], GU [0052]). 
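As a rough illustration of the “first matching score is a statistical value … based in part on at least one of a quantity or a frequency of instances of the metadata emoji in the reactions to the video” limitation, one possible reading is a relative-frequency statistic over aggregated reactions, thresholded to select metadata emojis. The function names and the threshold value below are assumptions of this sketch, not taken from Chen or the applicant’s specification.

```python
# Illustrative sketch: "first matching score" as a relative-frequency
# statistic over reaction emojis, with a hypothetical first threshold.
from collections import Counter

def first_matching_scores(reaction_emojis: list[str]) -> dict[str, float]:
    """Score each emoji by its relative frequency among all reactions
    to the video (one possible 'statistical value')."""
    counts = Counter(reaction_emojis)
    total = sum(counts.values())
    return {emoji: n / total for emoji, n in counts.items()}

FIRST_SCORE_THRESHOLD = 0.25  # hypothetical value

def select_metadata_emojis(reaction_emojis: list[str]) -> list[str]:
    """Keep only emojis whose score meets or exceeds the first threshold."""
    scores = first_matching_scores(reaction_emojis)
    return [e for e, s in scores.items() if s >= FIRST_SCORE_THRESHOLD]
```

For example, if four of six reactions to a video are “😂”, that emoji scores about 0.67 and is selected, while emojis appearing once (score ≈ 0.17) fall below the hypothetical 0.25 threshold.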
Regarding claims 53 and 63, Tesch as modified teaches the method and the system, wherein the set of emojis for the video comprises one or more emojis for each scene of the plurality of scenes (HAN [0048], [0084], [0088]-[0089], Tesch [0059]-[0060] “portions of media content or media content portions include segments of video clips and/or images that express the emoticon and/or acronym … generate any number of portions of a … video … as candidate to place within the multimedia message for the portion of the multimedia message that corresponds to or is expressed by the emoticon received”, [0066] “words or phrases are associated with the image identified and then the media content is searched and spliced for video segments, audio segments, and/or image content portions that represent the words or phrases”, [0083] “determined portions of video corresponding to the emoticon and/or acronym”, [0089], [0148], [0155], [0193]). Regarding claims 56 and 66, Tesch as modified teaches the method and the system, wherein the video is posted at one or more social platforms, the method further comprising identifying the video based on the one or more social platforms (Tesch [0071], [0101], CHEN [0026], GU [0063], HAN [0032]). Regarding claims 57 and 67, Tesch as modified teaches the method and the system, wherein the metadata emoji and the query emoji are untranslated images (GU [0078] wherein the evaluation is based on picture content and not based on text, thus, it is reasonable and obvious to conclude that the emoji is not translated during matching [0111], [0114], [0188]-[0189]). 
Regarding claims 58 and 68, Tesch as modified teaches the method and the system, wherein the representation of the video comprises a video clip from the video (Tesch [0060], [0075] “user, for example, could prefer a scene from a movie (e.g., Rocky) to represent an emoticon and/or acronym, rather than a segment of a home video”, [0098], [0126], [0158] “generate a sequence of media content portions that correspond to words, phrases or images of the inputs”). Regarding claims 59 and 69, Tesch as modified teaches the method and the system, wherein the representation of the video comprises an icon corresponding to the first emoji (Tesch [0053], [0062], HAN [0032], [0039]). Regarding claims 60 and 70, Tesch as modified teaches the method and the system, wherein the representation of the video is displayed at a remote device (Tesch [0118], [0120], [0222], HAN [0140]). Claim(s) 54, 64 is/are rejected under 35 U.S.C. 103 as being unpatentable over Tesch as modified and in further view of ALEY et al. (US 2016/0292148). Regarding claims 54 and 64, Tesch as modified teaches the method and the system, wherein the query comprises a text portion, the method further comprising determining, based on the text portion, a textual match. Tesch does not explicitly teach this; however, ALEY discloses a match score ([0130]-[0131], [0082]-[0084], [0086], [0105]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Tesch to include a match score as disclosed by ALEY. Doing so would provide a significant performance improvement (ALEY [0095]). Response to Arguments Applicant's arguments filed 01/12/2026 have been fully considered but they are not persuasive. With respect to the 35 USC 101 rejection, it is noted that the present amendments do not particularly add any additional functionality and for the most part simply rearrange the claim language and the limitations. 
The present amendments are further directed to additional mathematical calculations and logical judgments. While the applicant argues that the present claims “are not merely abstract mathematical calculations or "mental processes"; they are the specific "logical structures and processes" that constitute the improvement. The present claims improve, e.g., video information retrieval by enabling accurate content discovery based on aggregated user sentiment (reactions) rather than traditional keywords or individual tagging,” it is noted that such functionality is fully applicable to a mental process of selecting VHS (tape) videos comprising corresponding emoticons (smiley faces drawn on the tapes). Such a process of video selection can be a mental process. The user is interacting with the computer in a conventional manner. The proposed technical effect does not cause the computer in and of itself to operate differently, as the method operates at the level of an application and thus does not interact with the hardware at a level beyond that which any computer program would. Similarly, the computer upon which the program is operating is not operating more efficiently or effectively. The computer itself is operating entirely conventionally, and the contribution does not have an effect on the efficiency of the computer itself. Considering the elements of the claim both individually and as “an ordered combination,” the functions performed by the computer system at each step of the process are purely conventional. Each step of the claimed method does no more than require a generic computer to perform a generic computer function. Thus, the claimed elements have not been shown to integrate the judicial exception into a practical application as set forth in the Revised Guidance, which references MPEP §§ 2106.04(d) and 2106.05(a)-(c) and (e)-(h). Here, the claims recite generic computer components which are used to implement the method. 
Here, the claim has not been shown to be “significantly more” than the abstract idea. Therefore, the rejection is maintained. With respect to the 35 USC 103 rejection, the applicant argues – “proposed combination of Tesch, Han, Gu, and Chen lacks” the teachings of "selecting, from the set of emojis of the metadata, a metadata emoji based at least in part on determining the corresponding first matching score for the video meets or exceeds a first score threshold" “because they provide no mechanism for measuring or thresholding the strength of emoji reactions associated with a video. Furthermore, none of the referenced art, taken alone or in combination, suggests or renders obvious selecting a metadata emoji from a set of emojis nor determining a corresponding first matching score for each emoji in the set of emojis.” The arguments are not persuasive. Indeed, Tesch does not explicitly disclose such limitations. However, both references of HAN and Chen fully disclose the limitation of "selecting, from the set of emojis of the metadata, a metadata emoji based at least in part on determining the corresponding first matching score for the video meets or exceeds a first score threshold" as indicated in the rejection above. The applicant makes a blanket statement without any valid arguments. The applicant further argues – “None of the proposed references teach the comparison of untranslated emojis.” The arguments are not persuasive. The applicant does not define what the untranslated emoji actually is – a picture or a text. I.e., an emoji image and “:)” are both emojis. Gu fully teaches taking an original emoji (untranslated) and matching the original (and thus, untranslated) emojis, which reads on the broad limitations of the claims. 
Still, Gu clearly teaches such comparison in [0075] “a symbol matching rule and a picture content evaluation rule may be used to extract the second emoji and the text content relevant to the second emoji”, [0080] “matching may be a one-to-one matching, and may also be a fuzzy match (that is, pictures may also be matched when the similarity is above certain threshold).” Matching the symbols of the emojis surely corresponds to matching untranslated emoji content. The applicant once again argues the combination of references. However, obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, all of the references are in the same field of endeavor and solve a similar problem of providing recommendations based on emoticon metadata. The motivation used to combine the references is from the references themselves and is not improper. Applicant's remaining arguments, in regard to the presently amended claims, are addressed in the updated rejections to the claims above. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to POLINA G PEACH whose telephone number is (571)270-7646. The examiner can normally be reached Monday-Friday, 9:30 - 5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at 571-270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /POLINA G PEACH/ Primary Examiner, Art Unit 2165 January 26, 2026

Prosecution Timeline

Aug 24, 2023
Application Filed
Aug 24, 2023
Response after Non-Final Action
Oct 31, 2024
Non-Final Rejection — §101, §103, §DP
Feb 18, 2025
Response Filed
Mar 04, 2025
Final Rejection — §101, §103, §DP
Aug 11, 2025
Request for Continued Examination
Aug 20, 2025
Response after Non-Final Action
Oct 08, 2025
Non-Final Rejection — §101, §103, §DP
Jan 12, 2026
Response Filed
Jan 27, 2026
Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596921
Stochastic Bitstream Generation with In-Situ Function Mapping
2y 5m to grant Granted Apr 07, 2026
Patent 12585998
DETERMINING QUALITY OF MACHINE LEARNING MODEL OUTPUT
2y 5m to grant Granted Mar 24, 2026
Patent 12585632
METHOD, DEVICE, AND MEDIUM FOR MANAGING ACTIVITY DATA WITHIN AN APPLICATION
2y 5m to grant Granted Mar 24, 2026
Patent 12579191
IDENTIFYING SEARCH RESULTS IN A HISTORY REPOSITORY
2y 5m to grant Granted Mar 17, 2026
Patent 12572575
USING LARGE LANGUAGE MODELS TO GENERATE SEARCH QUERY ANSWERS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
50%
Grant Probability
73%
With Interview (+23.2%)
3y 7m
Median Time to Grant
High
PTA Risk
Based on 461 resolved cases by this examiner. Grant probability derived from career allow rate.
