Prosecution Insights
Last updated: April 19, 2026
Application No. 18/683,171

MAPPING MICRO-VIDEO HASHTAGS TO CONTENT CATEGORIES

Non-Final OA: §101, §103

Filed: Feb 12, 2024
Examiner: PADUA, NICO LAUREN
Art Unit: 3626
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: EBAY INC.
OA Round: 2 (Non-Final)

Grant Probability: 10% (At Risk)
OA Rounds: 2-3
To Grant: 3y 3m
With Interview: 27%

Examiner Intelligence

Grants only 10% of cases
Career Allow Rate: 10% (3 granted / 31 resolved), -42.3% vs TC avg

Strong +17% interview lift
Interview Lift: +17.2% among resolved cases with interview

Typical timeline
Avg Prosecution: 3y 3m; 51 currently pending

Career history
Total Applications: 82, across all art units

Statute-Specific Performance

§101: 40.0% (+0.0% vs TC avg)
§103: 30.8% (-9.2% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 31 resolved cases
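As a sanity check, the dashboard's derived figures can be reproduced from its raw numbers. The Tech Center averages below are inferred from the displayed deltas (an assumption; the report does not state them directly):

```python
# Sketch (not from the report): reproduce the dashboard's derived numbers.
granted, resolved = 3, 31
career_allow_rate = granted / resolved          # 0.0967..., shown as 10%

# statute -> (examiner allowance %, delta vs Tech Center average)
stats = {
    "101": (40.0, 0.0),
    "103": (30.8, -9.2),
    "102": (15.5, -24.5),
    "112": (11.4, -28.6),
}

# Implied TC average = examiner rate minus displayed delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}

print(round(career_allow_rate * 100))  # -> 10
print(implied_tc_avg)                  # every statute implies a 40.0% TC average
```

Notably, every statute row implies the same 40.0% Tech Center baseline, which suggests the tool charts each statute against a single TC-wide average rather than per-statute baselines.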

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This is a 2nd non-final rejection in response to amendments/remarks filed on 11/14/2025. Claims 1-4, 8-11, and 15-19 have been amended. Claims 1-20 remain pending and are examined herein.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed, granting priority to PCT/CN2021/112485 filed on 08/13/2021.

Claim Rejections – 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims do not fall within at least one of the four categories of patent eligible subject matter because they are directed to signals per se. Claim 15 recites "one or more computer storage media" without restricting the media to its non-transitory forms, in neither the claim language nor the specification (see at least paragraph [0079]). The amendments now recite "One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations..." However, these amendments still fail to overcome the rejection because the claim itself is still directed to the "computer storage media" and not the computing devices themselves. The rejection may be overcome by amending the claim to recite "one or more non-transitory computer storage media" without adding new matter.
Therefore, claim 15 and its dependent claims 16-20 are rejected under 101 for being directed to signals per se. For purposes of compact prosecution, claims 15-20 are reanalyzed under the full two-step process, as if they passed step 1.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Is the claim to a Process, Machine, Manufacture, or Composition of Matter?

Claims 1-7: A computer implemented method for mapping micro-video hashtags to content categories, the method comprising:

Claims 8-14: A system for mapping micro-video hashtags to content categories, the system comprising: one or more processors; and one or more memory devices in communication with the one or more processors, the memory devices having computer-readable instructions stored thereupon that, when executed by the processors, cause the processors to execute a method for mapping micro-video hashtags to content categories, the method comprising:

Claims 15-20: One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations

Claims 1-7 recite a computer-implemented method, which falls under "process." Claims 8-14 recite a system with processors and memory devices, which is an apparatus claim and falls under at least "machine or manufacture." Claims 15-20 recite computer-readable storage media, which fall under at least "manufacture." Therefore, all of the claims fall under at least one potentially eligible subject matter category and are to be further analyzed under step 2.

Step 2A Prong 1: Is the claim directed to a Judicial Exception (A Law of Nature, a Natural Phenomenon (Product of Nature), or An Abstract Idea)?

The claims are analyzed herein under the broadest reasonable interpretation in light of the specification.
Representative claims 1, 8, and 15 are marked up, isolating the abstract idea from additional elements, wherein the abstract idea is in bold and the additional elements have been italicized as follows:

Claim 1 Preamble: A computer implemented method for mapping micro-video hashtags to content categories, the method comprising:

Claim 8 Preamble: A system for mapping micro-video hashtags to content categories, the system comprising: one or more processors; and one or more memory devices in communication with the one or more processors, the memory devices having computer-readable instructions stored thereupon that, when executed by the processors, cause the processors to execute a method for mapping micro-video hashtags to content categories, the method comprising:

Claim 15 Preamble: One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising:

Claims 1, 8, 15 Body:
- collecting content categories from a content service;
- collecting micro-video, hashtags and user interaction semantic data from one or more micro-video services;
- training a graph convolution network with the content categories and the micro-video, hashtags, and user interaction semantic data to generate a multi-layer graph convolution network configured to provide the hashtags correlated with the content category to the content service,
- wherein the multi-layer graph convolution network comprises a concatenation layer for processing the micro-video, hashtags, and user interaction semantic data to determine a correlation of at least one content category to the micro-video, hashtags, and user semantic data.
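For concreteness, the recited architecture — per-modality graph propagation, a concatenation layer merging micro-video, hashtag, and user-interaction embeddings, then a projection to per-category correlation scores — can be sketched as a minimal pure-Python toy. The graph, dimensions, and weights here are invented placeholders, not the application's disclosed network:

```python
import random

random.seed(0)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def relu(M):
    return [[max(0.0, x) for x in row] for row in M]

def rand_w(rows, cols):
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

# Toy graph: 4 nodes (2 hashtags + 2 content categories), row-normalized adjacency
A_hat = [
    [0.5, 0.0, 0.5, 0.0],   # hypothetical #sneakerhaul <-> category "Shoes"
    [0.0, 0.5, 0.0, 0.5],   # hypothetical #guitarsolo  <-> category "Instruments"
    [0.5, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.5],
]

# One 3-dim embedding per node from each modality (made-up numbers)
video, tags, interactions = rand_w(4, 3), rand_w(4, 3), rand_w(4, 3)

# Graph convolution layer per modality: H' = ReLU(A_hat @ H @ W)
h_v = relu(matmul(matmul(A_hat, video), rand_w(3, 3)))
h_t = relu(matmul(matmul(A_hat, tags), rand_w(3, 3)))
h_i = relu(matmul(matmul(A_hat, interactions), rand_w(3, 3)))

# Concatenation layer: merge the three modality embeddings per node
h_cat = [rv + rt + ri for rv, rt, ri in zip(h_v, h_t, h_i)]   # 4 x 9

# Output layer: project to a per-node score for each of the 2 categories
scores = matmul(h_cat, rand_w(9, 2))
print(len(scores), len(scores[0]))   # -> 4 2
```

The examiner's black-box criticism maps onto this sketch directly: the claim fixes the input and output shapes and names a concatenation step, but leaves the propagation rule, training objective, and loss entirely open.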
When evaluating the bolded limitations of the claims under the broadest reasonable interpretation in light of the specification, it is clear that representative claims 1, 8, and 15 recite an abstract idea within the category of "certain methods of organizing human activity." More specifically, the present invention falls under the sub-grouping "managing personal behavior or relationships or interactions between people," which includes social activities, teaching, and following rules or instructions, as outlined in MPEP 2106.04(a)(2)(II)(C). In this case, the instant claims in bold recite steps of collecting content categories from a content service, collecting data from one or more micro-video services, determining a correlation of a content category to the data, and providing hashtags correlated with the category to the content service. These steps are merely data collection, data processing, and data output steps in service of the abstract idea of managing personal behavior or relationships or interactions between people on a content service.

In view of the amendments, the limitations now recite using a particular type of model/algorithm to determine the correlation and provide the hashtags. However, this model is recited with such generality that it is no more than a black box recited with its intended inputs and outputs. Since there are no specific steps on how the training is performed, how the hashtags are determined, or how the correlation is determined, other than that the model comprises a "concatenation layer," these functions are still part of the abstract idea of "certain methods of organizing human activity," as they are no more than a set of steps or instructions to "manage interactions between individuals." Displaying hashtags on a content service (like social media) is merely a display of data as a way to manage interactions on the social media platform (especially since user interactions are part of the equation).
Furthermore, the claims also fall within the category of commercial or legal interactions as outlined in MPEP 2106.04(a)(2)(II)(B), which includes agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. Providing hashtags related to the content category is an advertising/marketing behavior, as it aims to draw the user to content deemed relevant based on the user interactions/categories. This is supported in at least specification paragraph [0030]: "The resulting correlation model can be used to produce content recommendations based on a hashtag associated with a micro-video segment that a user is viewing, e.g. product recommendations can be presented while a user is viewing a micro-video. In one particular example, tag-level popularity can be utilized to recommend relevant hashtags for a content service, e.g. an information platform such as a news feed or educational platform, a social network platform, or an eCommerce platform." Therefore, the claims also recite commercial or legal interactions, such as recommending relevant hashtags for an eCommerce platform, or product recommendations while a user is viewing a relevant video.

Therefore, the claims recite an abstract idea within the category of "certain methods of organizing human activity" and are to be further analyzed under Step 2A Prong 2 and Step 2B.

Step 2A Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application?
Claims 1, 8 and 15 recite the following additional elements:
- A computer implemented method, in claim 1
- A system comprising: one or more processors; and one or more memory devices in communication with the one or more processors, the memory devices having computer-readable instructions stored thereupon that, when executed by the processors, cause the processors to execute a method, in claim 8
- One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations, in claim 15
- training a graph convolution network to generate a multi-layer graph convolution network, in claims 1, 8, 15

The additional elements listed above, when considered individually and in combination with the claim as a whole, amount to no more than a recitation of the words "apply it" (or an equivalent), or mere instructions to implement an abstract idea or other exception on generic computing components, as outlined in MPEP 2106.05(f). In this case, the abstract idea of "collecting content categories from a content service, collecting data from one or more micro-video services, determining a correlation of a content category to the data, and providing hashtags correlated with the category to the content service" is merely instructed to be performed on generic computing devices such as computers, processors, memory devices, and computer storage media. It is evident from at least specification paragraph [0041] that these devices are generic: "Examples of user/client applications 110 can include user client devices, such as mobile smartphone devices or personal computers, or applications executing on user client devices, such as browsers or micro-video applications or communication applications." Therefore, the devices are generic and do not improve computer technology, which is one of the considerations found in MPEP 2106.05(a) for integration into a practical application.
Furthermore, the fact that the model being trained is "a multi-layer graph convolution network" is also an "apply it" level element, since it merely invokes the use of computers to perform the abstract idea, using a computer performing "multi-layer graph convolution" as a tool to determine a correlation. Since it is recited so generally that it is no more than a black box with intended inputs and outputs, it is also no more than a general link to a particular technological environment or field of use as outlined in MPEP 2106.05(h). Reciting that the functions are performed on a "multi-layer graph convolution network" does not meaningfully limit the claims because it does not recite enough specificity on how the intended outputs are derived, even considering the use of a concatenation layer, which merely recites that the data is to be merged together. Therefore, in this case, using a multi-layer graph convolution network to determine a correlation is merely a general link to the field of neural networks.

Therefore, even when considering the additional elements individually, or as an ordered combination, nothing in the claims integrates the abstract idea into a practical application.

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception?
Claims 1, 8 and 15 recite the following additional elements:
- A computer implemented method, in claim 1
- A system comprising: one or more processors; and one or more memory devices in communication with the one or more processors, the memory devices having computer-readable instructions stored thereupon that, when executed by the processors, cause the processors to execute a method, in claim 8
- One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations, in claim 15
- training a graph convolution network to generate a multi-layer graph convolution network, in claims 1, 8, 15

The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, when considered separately and as an ordered combination, they do not add significantly more (also known as an "inventive concept") to the exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using generic computing devices such as computers, processors, memory devices, and computer storage media to perform the abstract idea of "collecting content categories from a content service, collecting data from one or more micro-video services, determining a correlation of a content category to the data, and providing hashtags correlated with the category to the content service" amounts to no more than mere instructions to apply the exception using generic computing components (see MPEP 2106.05(f)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. Similarly, generally linking the abstract idea to an online technological environment does not meaningfully limit the claim so as to provide an inventive concept, according to MPEP 2106.05(h).
Accordingly, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea. Thus, claims 1, 8, and 15 are not patent eligible because the claims are directed to an abstract idea without significantly more.

Dependent claims 2-7, 9-14, and 16-20 are also given the full two-part analysis, with the additional elements being considered individually and in an ordered combination as a whole, resulting in the following determinations.

Claims 2, 9, and 16 further define the abstract idea by adding the additional steps of determining popularity levels for the hashtags, determining a ranking of the hashtags based on popularity levels and relevance, and providing the ranking. This is more of the same abstract idea described in the representative claims because the claims merely describe further data processing steps, such as ranking, in order to manage user interactions. Furthermore, ranking the hashtags based on popularity and relevance is also more of the same abstract idea, since it is used for advertising and marketing the hashtags to a person. Therefore, the dependent claims, whether considered individually or as a whole in an ordered combination, also recite an abstract idea. There are no further additional elements to consider; therefore, the claims are directed to an abstract idea without integration into a practical application or significantly more.

Claims 3, 10 and 17 further define the abstract idea by adding processing with a concatenation layer, processing data output from the concatenation layer into the fully connected layer, and calculating similarity scores representative of the correlation of hashtags to the content category.
The processing of the data to result in the calculation of similarity scores is more of the same abstract idea because it is still used to perform the same abstract idea of "managing personal behavior, and commercial or legal interactions." The additional elements of processing with a concatenation layer, and processing data output from the concatenation layer into the fully connected layer, are still an example of generally linking the abstract idea to a technical field of use. Using a concatenation layer and processing that data into a fully connected layer still recites generic neural network functions generally linked to the abstract idea without meaningfully limiting its use. Even when considering the amended claims in combination, the claims still link the function of calculating similarity scores to the field of neural networks so generally that it does not meaningfully limit the abstract idea. Furthermore, the way in which a fully connected layer is used does not provide an improvement to the field of neural networks, as is considered in MPEP 2106.05(a). Therefore, the claims are directed to an abstract idea with additional elements that do not provide an integration into a practical application or significantly more.

Claims 4, 11, and 18 further limit the abstract idea because the additional steps merely call for the display of the similarity scores of the hashtags, which is still more of the same abstract idea but with an additional data output step. There are no further additional elements to consider; therefore, the claims are directed to an abstract idea without integration into a practical application or significantly more.

Claims 5, 12, and 19 further define the abstract idea by adding the additional steps of identifying a content category based on a received hashtag, identifying content from the correlated category, and providing the identified content.
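The ranking steps recited in claims 2, 9, and 16 above (determining popularity levels, then ranking hashtags by popularity and relevance) reduce to a short weighted sort, which illustrates why the examiner treats them as further data processing rather than an additional element. The candidate hashtags, scores, and equal weighting below are illustrative assumptions, not anything disclosed in the application:

```python
# Hypothetical hashtag candidates: (tag, popularity level, relevance score)
candidates = [
    ("#sneakerhaul", 0.90, 0.75),
    ("#unboxing",    0.60, 0.95),
    ("#ootd",        0.99, 0.20),
]

def rank(cands, w_pop=0.5, w_rel=0.5):
    """Rank hashtags by a weighted blend of popularity and relevance."""
    return sorted(cands, key=lambda c: w_pop * c[1] + w_rel * c[2], reverse=True)

ranking = [tag for tag, _, _ in rank(candidates)]
print(ranking)   # -> ['#sneakerhaul', '#unboxing', '#ootd']
```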
Whether considered individually or in an ordered combination, this is more of the same abstract idea, since it merely performs the same steps as the representative claims except that it outputs a piece of content as opposed to a hashtag. Outputting recommended content is still part of managing personal behavior or interactions between people, and of commercial or legal interactions such as advertising or marketing. There are no further additional elements to consider; therefore, the claims are directed to an abstract idea without integration into a practical application or significantly more.

Claims 6, 7, 13, 14 and 20 further limit the abstract idea by defining that the content category comprises an information category, the content data comprises one or more information items, the content service platform comprises an eCommerce platform, the content category comprises a product category, and the content data comprises product information. Even when substituting these limitations into the claims they depend on, the claims still recite the same abstract idea, since performing the abstract idea on an eCommerce platform further validates its categorization under commercial or legal interactions, and performing it on a social media or information platform validates its categorization under management of personal behavior or interactions. There are no further additional elements to consider; therefore, the claims are directed to an abstract idea without integration into a practical application or significantly more.

Claim Rejections – 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Cuan et al. (US 11120093 B1), hereinafter Cuan, in view of Yinwei Wei et al. (NPL, October 25, 2019, "Personalized Hashtag Recommendation for Micro-video," Session 3C: Smart Applications, Proceedings of the 27th ACM International Conference on Multimedia, see attached NPL), hereinafter Wei.

Regarding Claims 1, 8, 15: Cuan discloses embodiments for providing content based on computer vision processing, and providing content based on features of an object extracted from one or more images and descriptors associated with the features. Cuan teaches:

Claim 1 Preamble: A computer implemented method for mapping micro-video hashtags to content categories, the method comprising: (Cuan [Col. 3 Lines 47-64] In some embodiments, a user may capture one or more images via their client device. The images may be stored in memory of the client device, extracted from a video, or may be part of a live image or video stream... As described herein, a live image stream refers to a continuous set of images captured by an image capture component, and may encompass live video as well. Thus, the term "live image stream" should not be construed to be limited to only images as videos may also be obtained via a same or similar image capture functionality. [Col. 21 Lines 49-61] In some embodiments, the methods may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information)... The processing devices may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the operations of the methods.) Cuan's live image stream is synonymous with the present application's limitation "micro-video," because both describe short-form videos such as "live images."

Claim 8 Preamble: A system for mapping micro-video hashtags to content categories, the system comprising: one or more processors; and one or more memory devices in communication with the one or more processors, the memory devices having computer-readable instructions stored thereupon that, when executed by the processors, cause the processors to execute a method for mapping micro-video hashtags to content categories, the method comprising: (Cuan [Col. 2 Lines 42-61] FIG. 1 shows a system 100 for providing content based on computer vision processing of one or more images, in accordance with one or more embodiments. As shown in FIG. 1, system 100 may include computer system 102, client device 104 (or client devices 104a-104n), or other components...
By way of example, client device 104 may include a desktop computer, a notebook computer, a tablet computer, a smartphone, a wearable device, or other client device. Users may, for instance, utilize one or more client devices 104 to interact with one another, one or more servers, or other components of system 100. An example computing system which may be implemented on or by client device 104 is described in greater detail below with respect to FIG. 10.)

Claim 15 Preamble: One or more computer storage media storing computer-usable instructions that, when used by one or more computing devices, cause the one or more computing devices to perform operations, the operations comprising: (Cuan [Col. 30 Line 44 - Col. 31 Line 3] System memory 1020 may include a tangible program carrier having program instructions stored thereon. A tangible program carrier may include a non-transitory computer readable storage medium. A non-transitory computer readable storage medium may include a machine-readable storage device, a machine-readable storage substrate, a memory device, or any combination thereof. Non-transitory computer readable storage medium may include non-volatile memory ... System memory 1020 may include a non-transitory computer readable storage medium that may have program instructions stored thereon that are executable by a computer processor (e.g., one or more of processors 1010A-1010N) to cause the subject matter and the functional operations described herein.)

Claims 1, 8, 15 Body:

- collecting content categories from a content service; (Cuan [Col. 15 Lines 4-40] Column 506 may include descriptors determined to be related to a feature set, and object label, and/or an image. In some embodiments, the object label may be used as an input to determine whether any similar descriptors exist on a communications network. For example, a determination may be made as to whether any similar hashtags exist on a social media network. The similarity between a descriptor and an object label may be determined and if the similarity satisfies a similarity condition (e.g., exceeds or is equal to a similarity threshold), then the object label may be classified as being the same or similar to the descriptor (or descriptors if more than one). For example, the object label "Object_0" may be determined to be related to a first group of descriptors, such as hashtags #bandmusic and #music; [Col. 23 Lines 42-51] In some embodiments, the object determined to depicted within the live image stream may correspond to an image from a training data set. The image from the training data set, which may also depict the image, may include an object label (e.g., an object name, an object type, etc.) or other description information associated with the object. Some embodiments include determining related descriptors based on the object label, such as by performing a descriptor search on a communications network for descriptors using the object label as an input query. [Col. 23 Line 63 - Col. 24 Line 1] In some embodiments, descriptors related to the object may be determined by extracting an object name, object type, or other characteristics of the object, from the URL of the webpage related to the object. For example, an object name, such as a product name, may be extracted from the URL.) "Content categories" has the broadest reasonable interpretation of any classification/grouping of content. Therefore, groups of descriptors and object labels/types are mapped to the content categories. The content service is the social media network or communications network.

- collecting micro-video, (Cuan [Col. 3 Lines 51-64] For example, a mobile application for a communications network (e.g., a social media network) may include an image capture functionality whereby a user may capture an image or video and directly upload that image or video to the user's account on the communications network. As described herein, a live image stream refers to a continuous set of images captured by an image capture component, and may encompass live video as well. Thus, the term "live image stream" should not be construed to be limited to only images as videos may also be obtained via a same or similar image capture functionality. Furthermore, although some embodiments refer to a live image stream being captured, previously captured images or videos may also be obtained.) Live image streams are mapped to micro-video data, and the social media network in which the image was taken is mapped to the micro-video services.

- hashtags and (Cuan [Col. 12 Line 64 - Col. 13 Line 7] Column 408 may include different descriptors searched by a corresponding account identifier. In some embodiments, a user corresponding to an account ID may input a descriptor (e.g., some or all a hashtag), into a query input field of the communications network (e.g., via the communications network's mobile application), and content items, messages, accounts, or other relevant data may be retrieved that is related to the input hashtag. Alternatively, or additionally, a user may select a descriptor associated with a content item (e.g., a hashtag displayed in association with a post on the communications network).) Cuan also uses "hashtags," which are used interchangeably with the term "descriptor."

- user interaction data from one or more micro-video services; (Cuan [Col. 12 Lines 8-14] The weight of an edge may change over time based on interactions of the connecting users. Account connection subsystem 118 may be configured to monitor all interactions between the user accounts of the communications network, and may update the weights based on the interactions. For example, two users that frequently send content items to one another may have their weight increase over time. [Col.
27 Lines 53-63] For example, in response to detecting a user has interacted with that AR visualization 602, which includes a dynamic link to a particular user account (e.g., @band_music_information) on the communications network, the communications network may direct the UI to display content related to that user account. In some embodiments, each detected interaction with the AR visualization may be logged and stored by content database 134 along with an account ID of the user account on the communications network with which such an interaction was detected.) Cuan takes user interaction data from the micro-video services, whether that is interactions with other users in [Col. 12] or interactions with the system [Col. 27].

- using a multi-layer graph convolution network; (Cuan [Col. 8 Lines 4-36] In some embodiments, the feature extraction process may be performed using a deep learning process. For example, a deep CNN (convolutional neural network), trained on a large set of training data (e.g., the AlexNet architecture, which includes 5 convolutional layers and 3 fully connected layers, trained using the ImageNet dataset) may be used to extract features from an image. In some embodiments, a pre-trained machine learning model may be obtained and used for performing feature extraction for images. In some embodiments, a support vector machine (SVM) may be trained with a training data to obtain a trained object recognition model for performing feature extraction. In some embodiments, a classifier may be trained using extracted features from an earlier layer of the machine learning model... In some embodiments, the input images, the features extracted from each of the input images, an identifier labeling each of the input image, or any other aspect capable of being used to describe each input image, or a combination thereof, may be stored in memory, such as via feature database 140.
In some embodiments, a feature vector describing visual features extracted from each image, a context of the image, and an object or objects determined to be depicted by the image, and the like, may be stored by feature database 140.) - provide the hashtags correlated with the content category to the content service, (Cuan [Col. 23 Lines 51-54] A most popular, most relevant, or most similar (e.g., with respect to the object label) N-descriptors may be selected to be the descriptors hashtags related to the object. [Col. 15 Lines 4-40] Column 506 may include descriptors determined to be related to a feature set, an object label, and/or an image. In some embodiments, the object label may be used as an input to determine whether any similar descriptors exist on a communications network. For example, a determination may be made as to whether any similar hashtags exist on a social media network... The related descriptor or descriptors for each object label may be stored in column 506. For example, the object label “Object_0” may be determined to be related to a first group of descriptors, such as hashtags #bandmusic and #music; the object label “Object_1” may be determined to be related to a second group of descriptors, such as hashtag #sportsteam; the object label “Object_2” may be determined to be related to a third group of descriptors, such as hashtag #celebrityname; and the object label “Object_P” may be determined to be related to a fourth group of descriptors, such as hashtag #news.) In these citations, descriptors are identified and can be selected within the social media network. - to determine a correlation of at least one content category to the micro-video, hashtags. (Cuan [Col. 23 Line 41 – Col. 24 Line 13] In an operation 810, a descriptor related to the object may be determined. In some embodiments, the object determined to be depicted within the live image stream may correspond to an image from a training data set.
The image from the training data set, which may also depict the image, may include an object label (e.g., an object name, an object type, etc.) or other description information associated with the object. Some embodiments include determining related descriptors based on the object label, such as by performing a descriptor search on a communications network for descriptors using the object label as an input query. A most popular, most relevant, or most similar (e.g., with respect to the object label) N-descriptors may be selected to be the descriptors hashtags related to the object. Similarity between a descriptor and an object may be computed by calculating a similarity score between a character string of the descriptor and a character string of the object label. For example, a Word2Vec model may be used to compute a distance between a feature vector of the character string of a hashtag and a feature vector of the character string of the object label. A distance that satisfies a distance threshold (e.g., being less than or equal to a predetermined distance) may be classified as being similar. Descriptors related to the extracted object name may be determined by performing a descriptor search on the communications network, where one or more descriptors related to the object name may be obtained. A similarity between each of the descriptors and the object name may be computed to determine a top N most relevant, popular, and/or similar descriptors, for example using a Word2Vec model as described above. In some embodiments, operation 810 may be performed by a subsystem that is the same or similar to object recognition subsystem 114, descriptor subsystem 120, or a combination of both object recognition subsystem 114 and descriptor subsystem 120.) Finding the similarity between object labels and descriptors (a process involving both the descriptor subsystem and the object recognition subsystem) is an example of determining a correlation of at least one content category...
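The similarity computation Cuan describes (embedding the descriptor and object-label character strings, then keeping descriptors whose distance satisfies a threshold) can be sketched roughly as follows. The embedding vectors and the 0.05 threshold are hypothetical stand-ins for a trained Word2Vec model; they are not values from Cuan.

```python
import math

# Hypothetical, hand-assigned embedding vectors standing in for a trained
# Word2Vec model (illustrative only; not Cuan's actual model or data).
EMBEDDINGS = {
    "#bandmusic": [0.9, 0.1, 0.0],
    "#music": [0.8, 0.2, 0.1],
    "#news": [0.0, 0.1, 0.9],
    "band": [0.85, 0.15, 0.05],
}

def cosine_distance(a, b):
    """Distance = 1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def similar_descriptors(object_label, descriptors, threshold=0.05):
    """Keep descriptors whose distance to the object label's embedding
    is less than or equal to the predetermined threshold."""
    label_vec = EMBEDDINGS[object_label]
    return [d for d in descriptors
            if cosine_distance(EMBEDDINGS[d], label_vec) <= threshold]
```

Under these toy values, `similar_descriptors("band", ["#bandmusic", "#music", "#news"])` keeps the music-related hashtags and drops `#news`, mirroring the threshold classification the citation describes.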
However, Cuan does not teach: - training a graph convolution network with the content categories and the micro-video, hashtags, and user interaction semantic data to generate a multi-layer graph convolution network configured to provide the hashtags correlated with the content category to the content service, (Cuan teaches user interaction data but not “user interaction semantic data.” Furthermore, Cuan teaches training a model using content categories, micro-video, and hashtags, but not the user interaction semantic data.) - wherein the multi-layer graph convolution network comprises a concatenation layer for processing the micro-video, hashtags, and user interaction semantic data to determine a correlation of at least one content category to the micro-video, hashtags, and user semantic data. (Cuan teaches multi-layer convolution networks for processing the data to determine a correlation of the content category to the micro-video and hashtags, but does not specifically recite that any of the layers is a “concatenation layer.”) Alternatively, Wei discloses a personalized hashtag recommendation system that suggests hashtags for users to annotate, categorize, and describe their posts, while considering the post content and user preferences, using a Graph Convolution Network based Personalized Hashtag Recommendation (GCN-PHR) model.
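As a rough illustration of the “concatenation layer” limitation at issue, a fusion step of the general kind Wei describes (concatenating node representations and feeding them through a fully connected layer) might look like the sketch below. The dimensions, random weights, and ReLU choice are illustrative assumptions, not Wei's actual GCN-PHR architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def fully_connected(x, weights, bias):
    """One fully connected layer with a ReLU nonlinearity (an assumed choice)."""
    return np.maximum(weights @ x + bias, 0.0)

def fuse(video_repr, hashtag_repr, user_repr, weights, bias):
    """Concatenation layer: stack the three node representations into one
    vector, then feed it through a fully connected layer."""
    concatenated = np.concatenate([video_repr, hashtag_repr, user_repr])
    return fully_connected(concatenated, weights, bias)

# Toy 4-dimensional representations for the three node types
# (placeholder values, not learned embeddings).
video = rng.normal(size=4)
hashtag = rng.normal(size=4)
user = rng.normal(size=4)
W = rng.normal(size=(8, 12))  # maps the 12-dim concatenation to 8 dims
b = np.zeros(8)
fused = fuse(video, hashtag, user, W, b)
```

The point of the sketch is only the data flow: three separate representations become one concatenated vector before the fully connected layer, which is the structure the claim's “concatenation layer” language targets.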
Wei teaches: - training a graph convolution network with the content categories and the micro-video, hashtags, and user interaction semantic data to generate a multi-layer graph convolution network configured to provide the hashtags correlated with the content category to the content service, (Wei [Page 1446 Abstract] In this paper, towards the personalized micro-video hashtag recommendation, we propose a Graph Convolution Network based Personalized Hashtag Recommendation (GCN-PHR) model, which leverages recently advanced GCN techniques to model the complicate interactions among <users, hashtags, micro-videos> and learn their representations. In our model, the users, hashtags, and micro-videos are three types of nodes in a graph and they are linked based on their direct associations. See [3.3 Pairwise-based Learning] for a learning method for optimization which consists of training the convolution network with the representation of textual, visual, and acoustic features, as well as the micro-video, hashtags, and user data. See [4.1 Experimental Settings], which shows that the features from textual, visual, and acoustic modalities are extracted from the videos. The “features” are mapped to “content categories” since features are characteristics of the content. Wei [4.4 Visualization] ‘In the last example, our presented model recommends the third micro-video with ‘#beach’, ‘#ocean’, and ‘#beauty’. Different from other hashtags, ‘#beauty’ does not have a specific visual appearance. And different users have their own tastes on ‘beauty’. However, our model can still correctly recommend this hashtag, this further validates the powerful capability of our model on learning user-specific hashtag semantic and matching it with the user-interested micro-video contents.’ See [Figure 3 Visualization of hashtag recommended on the Instagram dataset] for providing hashtags to the content service. See [Fig.
2 Schematic illustration of our proposed graph-based convolutional network] to see that the resulting network is “multi-layer.”) In Wei, it is clear that “user” data is “user interaction semantic data” based on the citations above. - wherein the multi-layer graph convolution network comprises a concatenation layer for processing the micro-video, hashtags, and user interaction semantic data to determine a correlation of at least one content category to the micro-video, hashtags, and user semantic data. (Wei [Page 1450 Neural Network-based Fusion] In this method, u_i^v and u_i^v are first concatenated and then fed into a fully connected layer to obtain the final representation of the user preference. Formally, the user preference is obtained by... [Fig. 2 also shows the concatenation layers.] See [3.1 Problem Setting and Model Overview]: “When an interaction exists between two nodes...there will be an edge...to link the two nodes in the graph.” Also: “the recommended hashtags should not only match the micro-video contents but also fit the personal preferences of users. To achieve this goal, we apply the graph convolutional networks to modeling the complex interactions among three types of entities: users, hashtags, and micro-videos.”) In Wei [3.1], the edges are an example of determining a correlation of the nodes to the micro-videos. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to modify Cuan by adding the teachings of Wei, particularly the training of a multi-layer graph convolution network to correlate content categories to the micro-videos, hashtags, and user interaction semantic data. One of ordinary skill in the art would have been motivated by the benefit of using content information in hashtag recommendation to improve the accuracy of the results.
(Wei [page 1453] This also demonstrates the benefit of using the content information in hashtag recommendation.) Regarding Claims 2, 9, 16: The combination of Cuan and Wei teaches The method of Claim 1/The system of claim 8/The computer storage media of claim 15. Furthermore, Cuan teaches (the body of claim 2, which is also representative of claims 9 and 16): where the method includes: - determining popularity levels for the hashtags; (Cuan [Col. 23 Lines 51-54] A most popular, most relevant, or most similar (e.g., with respect to the object label) N-descriptors may be selected to be the descriptors hashtags related to the object.) - determining the hashtags based on popularity levels and relevance; (Cuan [Col. 24 Lines 5-8] A similarity between each of the descriptors and the object name may be computed to determine a top N most relevant, popular, and/or similar descriptors, for example using a Word2Vec model as described above.) - and providing the hashtags correlated with the content category to the content service (Cuan [Col. 23 Lines 51-54] A most popular, most relevant, or most similar (e.g., with respect to the object label) N-descriptors may be selected to be the descriptors hashtags related to the object. [Col. 15 Lines 4-40] Column 506 may include descriptors determined to be related to a feature set, an object label, and/or an image. In some embodiments, the object label may be used as an input to determine whether any similar descriptors exist on a communications network. For example, a determination may be made as to whether any similar hashtags exist on a social media network... The related descriptor or descriptors for each object label may be stored in column 506.
For example, the object label “Object_0” may be determined to be related to a first group of descriptors, such as hashtags #bandmusic and #music; the object label “Object_1” may be determined to be related to a second group of descriptors, such as hashtag #sportsteam; the object label “Object_2” may be determined to be related to a third group of descriptors, such as hashtag #celebrityname; and the object label “Object_P” may be determined to be related to a fourth group of descriptors, such as hashtag #news.) However, Cuan fails to teach: - wherein providing the hashtags correlated with the content category to the content service comprises providing the ranking of the hashtags correlated with the content category to the content service. Alternatively, Wei teaches: - wherein providing the hashtags correlated with the content category to the content service comprises providing the ranking of the hashtags correlated with the content category to the content service. (Wei [3.2.7 Personalized Hashtag Recommendation, page 1450] Given a new micro-video, uploaded by a user, the hashtags in H could be recommended in the descending order of their similarity score with respect to v_k based on the user’s preferences. Specifically, the similarity score is computed by the dot product of the user-specific hashtag representation and the user-specific micro-video representation.) Providing the correlated hashtags in descending order of similarity score is an example of “providing the ranking of the hashtags.” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to modify Cuan by adding the teachings of Wei, particularly the providing of a ranked list of personalized hashtag recommendations in descending order. One of ordinary skill in the art would have been motivated by the benefit of providing the most relevant hashtags closer to the top of the list to attract the attention of the user.
(Wei [3.2.2 User Preference on Micro-videos] Notice that a micro-video contains a sequence of video frames with rich information. For a specific micro-video, a user may be only interested in its certain parts. To accurately model the user preference on micro-videos, it is crucial to identify which part in each micro-video attracts the attention of the user.) Regarding Claims 3, 10, 17: The combination of Cuan and Wei teaches The method of Claim 1/The system of claim 8/The computer storage media of claim 15. However, Cuan fails to teach (the body of claim 3, which is also representative of claims 10 and 17): - where: determining a correlation of at least one content category to the micro-video, hashtags, and user interaction semantic data comprises: - processing data output from the concatenation layer with a full connected layer of the multi-layer graph convolution network to produce a user-specific micro-video representation and a user-specific hashtag representation; and - calculating similarity scores for hashtags from content from the content category and a product of [[the]] micro-video semantic features and user-specific hashtags, and - determining the correlation of hashtags to the content category from the similarity scores. However, Wei teaches: - where: determining a correlation of at least one content category to the micro-video, hashtags, and user interaction semantic data comprises: processing data output from the concatenation layer with a full connected layer of the multi-layer graph convolution network to produce a user-specific micro-video representation and a user-specific hashtag representation; and (Wei [Page 1447] Thereafter, our model further learns the user-specific micro-video features and user-specific hashtag semantics with obtained representations of user preference and hashtag semantic, which are then used for personalized hashtag recommendation for micro-videos. [Also See Fig.
3] for the concatenated layer output going into the fully connected layer to acquire the representations. [See 3.2.3 Neural Network-based Fusion on Page 1450] first concatenated and then fed into a fully connected layer to obtain the final representation of the user preference.) - calculating similarity scores for hashtags from content from the content category and a product of [[the]] micro-video semantic features and user-specific hashtags, and (Wei [Page 1448] The learned representations are then used to measure the similarity scores between hashtags and posts. [3.2.7 Personalized Hashtag Recommendation, page 1450] Specifically, the similarity score is computed by the dot product of the user-specific hashtag representation and the user-specific micro-video representation.) - determining the correlation of hashtags to the content category from the similarity scores. (Wei [5 Conclusion, page 1453] The learned user-specific micro-video representation and user-specific hashtag representation are used to compute the suitability score (or similarity score) of the hashtag with respect to the micro-video for hashtag recommendation.) The suitability score falls within the scope of “correlation.” Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the present disclosure to modify Cuan by adding the teachings of Wei, particularly the determination of a correlation using the data from a concatenation layer fed into a full connected layer to produce a micro-video representation and a user-specific hashtag representation. One of ordinary skill would have been motivated by the benefit of such a technique being able to correctly recommend hashtags despite different user tastes on a particular category. (Wei [Page 1453] In the last example, our presented model recommends the third micro-video with ‘#beach’, ‘#ocean’, and ‘#beauty’. Different from other hashtags, ‘#beauty’ does not have a specific visual appearance.
And different users have their own tastes on ‘beauty’. However, our model can still correctly recommend this hashtag, this further validates the powerful capability of our model on learning user-specific hashtag semantic and matching it with the user-interested micro-video contents.) Regarding Claims 4, 11, and 18: The combination of Cuan and Wei teaches The method of Claim 3/The system of claim 10/The computer storage media of claim 17. Furthermore, Cuan teaches (the body of claim 4, which is also representative of claims 11 and 18): - where the method includes: providing the hashtags correlated with the content category to the content service, (Cuan [Col. 23 Lines 51-54] A most popular, most relevant, or most similar (e.g., with respect to the object label) N-descriptors may be selected to be the descriptors hashtags related to the object. [Col. 15 Lines 4-40] Column 506 may include descriptors determined to be related to a feature set, an object label, and/or an image. In some embodiments, the object label may be used as an input to determine whether any similar descriptors exist on a communications network. For example, a determination may be made as to whether any similar hashtags exist on a social media network... The related descriptor or descriptors for each object label may be stored in column 506. For example, the object label “Object_0” may be determined to be related to a first group of descriptors, such as hashtags #bandmusic and #music; the object label “Object_1” may be determined to be related to a second group of descriptors, such as hashtag #sportsteam; the object label “Object_2” may be determined to be related to a third group of descriptors, such as hashtag #celebrityname; and the object label “Object_P” may be determined to be related to a fourth group of descriptors, such as hashtag #news.)
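The dot-product similarity scoring and descending-order ranking that the rejection attributes to Wei (see the citations to [3.2.7] above) can be illustrated with a short sketch. The representation vectors below are invented illustrative values, not learned outputs from either reference.

```python
import numpy as np

def rank_hashtags(video_repr, hashtag_reprs):
    """Score each hashtag by the dot product of its user-specific
    representation with the user-specific micro-video representation,
    then return (hashtag, score) pairs in descending score order."""
    scores = {tag: float(np.dot(vec, video_repr))
              for tag, vec in hashtag_reprs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical learned representations (illustrative values only).
video = np.array([0.6, 0.8, 0.0])
hashtags = {
    "#beach": np.array([0.7, 0.7, 0.1]),
    "#ocean": np.array([0.5, 0.9, 0.2]),
    "#news":  np.array([0.0, 0.1, 0.9]),
}
ranked = rank_hashtags(video, hashtags)
```

With these values, the beach/ocean hashtags score well above `#news`, and the ranked list is what the claims call “providing the ranking of the hashtags” in descending similarity order.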
However, Cuan fails to teach: - wherein providing the hashtags correlated with the content category to the content service includes providing the similarity scores for the hashtags to the content service. Alternative

Prosecution Timeline

Feb 12, 2024
Application Filed
Jul 14, 2025
Non-Final Rejection — §101, §103
Sep 02, 2025
Interview Requested
Oct 15, 2025
Applicant Interview (Telephonic)
Oct 15, 2025
Examiner Interview Summary
Nov 14, 2025
Response Filed
Dec 05, 2025
Non-Final Rejection — §101, §103
Mar 24, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586035
INTERACTIVE USER INTERFACE FOR SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12523701
METHOD FOR MANAGING BATTERY RECORD AND APPARATUS FOR PERFORMING THE METHOD
2y 5m to grant Granted Jan 13, 2026
Patent 11881521
SEMICONDUCTOR DEVICE
2y 5m to grant Granted Jan 23, 2024
Study what changed to get past this examiner. Based on 3 most recent grants.


Prosecution Projections

2-3
Expected OA Rounds
10%
Grant Probability
27%
With Interview (+17.2%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 31 resolved cases by this examiner. Grant probability derived from career allow rate.
