Prosecution Insights
Last updated: April 19, 2026
Application No. 18/489,358

System and Method for Determining User Interest Over Time Based on Application Data

Status: Final Rejection (§103)
Filed: Oct 18, 2023
Examiner: VANDERHORST, MARIA VICTORIA
Art Unit: 3621
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Glance Inmobi Pte. Limited
OA Round: 2 (Final)
Grant Probability: 48% (Moderate)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 9m
Grant Probability with Interview: 86%

Examiner Intelligence

Career Allow Rate: 48% of resolved cases (280 granted / 579 resolved; -3.6% vs TC avg)
Interview Lift: +37.8% allowance lift for resolved cases with interview
Typical Timeline: 3y 9m average prosecution; 28 applications currently pending
Career History: 607 total applications across all art units
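Read together, the 86% with-interview figure is consistent with treating the +37.8% interview lift as percentage points added to the 48% career allow rate. A quick sanity check (the additive, percentage-point interpretation is an assumption, not stated by the dashboard):

```python
# Dashboard figures; the additive relationship is assumed
career_allow_rate = 48.0   # % of resolved cases granted
interview_lift = 37.8      # percentage-point lift with interview

with_interview = career_allow_rate + interview_lift
print(round(with_interview))  # 86, matching the "with Interview" figure
```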

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 13.2% (-26.8% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Tech Center averages are estimates; based on career data from 579 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Response to Amendment

This communication is in response to the amendment filed on 08/11/2025 for application No. 18/489,358. Claims 1-16 are currently pending and have been examined. Claims 1-16 have been rejected as follows.

Examiner's Note

Regarding §101 compliance, claims 1-16 are compliant with §101 under the latest "2019 Revised Patent Subject Matter Eligibility Guidance" (2019 PEG), published in MPEP 2103 through 2106.07(c). The Examiner's analysis is presented below for all the claims.

Claim 1: Step 1 of the 2019 PEG: does the claim fall within a statutory category? Yes. The claim recites a method.

Step 2A, Prong 1: Is a judicial exception recited in the claim? Yes. The claim recites the limitations of “identifying the user embeddings, by a first embeddings identifier module [[115]], for the at least one user based on at least: a first apps data, wherein the first apps data comprise a first list of apps installed … of the at least one user, the first apps data being determined based on one or more first set of attributes, by an apps data determination module [[120]]; selecting app embeddings, by an app embeddings selection module [[125]], for the apps installed … of the at least one user [[105]], wherein the app embeddings are identified for and representative of each app of a plurality of apps installed … of a plurality of users; and[[,]] performing weighted mean pooling on the selected app embedding vectors, by a pooling module [[130]], for each app installed … of the at least one user [[105]] along with weights, wherein the weights comprise the data associated with a frequency of updating an app installed ..
device of the at least one user [[105]]; deriving user interest representation, by the user interest identification module [[145]], based on identified user embeddings for determining user interest of the at least one user [[105]];”

The “identifying, selecting, performing, deriving” limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations as certain methods of organizing human activity (advertising, marketing, or sales activities or behaviors): a method for determining user interest of at least one user. Thus, the claim recites an abstract idea.

Step 2A, Prong 2: Is the judicial exception integrated into a practical application? Yes. The claim recites additional limitations, such as “accessing embeddings data associated with installed application data …, wherein the embeddings data represents installed applications … providing recommendations to the at least one user's user device, by a recommendation module [[150]], based on determined user interest of the at least one user [[105]]”, and other supplementary elements in the claim: “user's user device”, “executing code by a computer system to cause the computer system to perform operations”, and “representing each of the selected app embeddings as unique vectors;”. The claim as a whole integrates the method of organizing human activity into a practical application. Specifically, the additional limitations recite a specific method for determining user interest of at least one user. The claim as a whole also reflects the improvement described in the background: ““Embeddings” are representations of data in another form. For example, a “word embedding” is a representation of a word. Existing methods obtain app embeddings based on long short-term memory (LSTM). Existing methods define Short-term applications installed window and Long-term applications Install Window.
However, LSTM methods have limited ability to capture graph structure, and are computation intensive and have limited interpretability. LSTM models typically require large amounts of training data to achieve good performance. LSTM models are designed to operate on fixed-length sequences and may struggle to adapt to changes in graph structure or edge weights over time. FIG. 1 is a block diagram depicting an exemplary architecture of a proprietary information bypass, user interest and recommendation system 100 for determining a user interests, such as user product and service interests, from application embeddings. FIG. 2 is an exemplary block diagram representing an interest and recommendation method 200 for determining the user interest. In at least one embodiment, the user interest and recommendation system 100 operates in accordance with the interest and recommendation method 200 to determine such user interest based on interactions between the user with the user's device and provides recommendations to the user via the user device based on the determined user interests.”, paragraphs 28-29.

Thus, the claim as a whole is eligible because it is not directed to the recited judicial exception (abstract idea).

Step 2B: Does the claim provide an inventive concept? N/A.

Claim 12: Step 1 of the 2019 PEG: does the claim fall within a statutory category? Yes. The claim recites a system. Step 2A, Prong 1: Is a judicial exception recited in the claim? Yes, for the same reasons noted above. Step 2A, Prong 2: Integrated into a practical application? Yes, for the same reasons noted above. The claim is eligible.

Dependent claims 2-11 and 13-16 recite elements such as “wherein the one or more first set of attributes comprises: a) a timestamp of a time of installation of each application on the first list of applications of the at least one user 1, and b) user device id and app id of each application on the first list of applications of the at least one user”, etc.
These elements integrate the system of organizing human activity into a practical application. The claims are eligible.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-5 and 9-15 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent No. 11461634 (Bhatia) in view of US PG. PUB. No. 20160299977 (HREHA).
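The weighted mean pooling step of claim 1, which the rejection later maps onto Bhatia's weighted matrix, reduces a user's per-app embedding vectors to a single vector using per-app weights (the claim ties the weights to app update frequency). A minimal sketch; the embedding values and weights below are hypothetical, not taken from either reference:

```python
def weighted_mean_pool(app_embeddings, weights):
    """Weighted mean pooling: sum(w_i * v_i) / sum(w_i), per dimension."""
    total_w = sum(weights)
    dims = len(app_embeddings[0])
    return [
        sum(w * vec[d] for w, vec in zip(weights, app_embeddings)) / total_w
        for d in range(dims)
    ]

# Hypothetical 2-d embeddings for three installed apps,
# weighted by each app's update frequency
embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
update_freq = [2.0, 1.0, 1.0]
print(weighted_mean_pool(embeddings, update_freq))  # [0.75, 0.5]
```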
As to claims 1 and 12, Bhatia discloses A method for bypassing inaccessible proprietary information and accessing application embeddings data (7:25-36 and Fig. 1) to determining user interest of at least one user, based on interactions of the at least one user with the user's user device, (“methods for generating user embeddings utilizing an interaction-to-vector neural network. For example, a user embeddings system transforms unorganized data of user interactions with content items into structured user interaction data….”, abstract) and providing recommendations to the at least one user's user device based on the determined user interest of the at least one user, (“The user embeddings system also improves computer efficiency. Indeed, by more accurately and precisely identifying relationships between user interactions and content items, …. for predicting a task …”, 5:15-20. “…the term “content item” refers to digital data (e.g., digital data that may be transmitted over a wired or wireless network). In particular, the term “content item” includes text, images, video, audio and/or audiovisual data. Examples of digital content include images, text, graphics, messages animations, notifications, advertisements, reviews, summaries, as well as content related to a product or service…”, 5:30-35. “…the term “user interaction” refers to contact from the user with respect to a content item corresponding to a product or service offered by an entity, such as an individual, group, or business. 
Examples of user interactions include visiting a website, receiving an email, opening an email, clicking on a link in an email, making a purchase, downloading a native computing application, or downloading, opening, viewing, selecting, playing viewing, pausing, stopping, skipping, continuing viewing, closing, moving, ignoring, resizing, and sharing a content item etc…”, 5:40-50) the method comprising: executing code by a computer system to cause the computer system to perform operations (Fig. 1 and associated disclosure) comprising: accessing embeddings data associated with installed application data on a user device, wherein the embeddings data represents installed applications on the user device; (“FIGS. 4A-4D and FIG. 5 described various embodiments of training an interaction-to-vector neural network and generating user embeddings for users. Accordingly, the actions and algorithms described in connection with FIGS. 4A-4D and FIG. 5 provide example structure for performing a step for generating user embeddings for the plurality of users based on the organized user interaction data to obtain homogenous embedding representations from heterogeneous user interaction data. More particularly, the actions and algorithms described in training the interaction-to-vector neural network 400 with respect to FIGS. 4A-4D as well as using the trained interaction-to-vector neural network 400 to obtain user embeddings with respect to FIG. 5 can provide structure for performing a step for generating user embeddings for the plurality of users based on the organized user interaction data to obtain homogenous embedding representations from heterogeneous user interaction data”, 21:63-65 and 22:1-13 and Figs. 4A-4D and 5); a) identifying user embeddings, by a first embeddings identifier module , for the at least one user (“… As mentioned above, the user embeddings system can train the interaction-to-vector neural network to learn user embeddings. 
As used herein, the terms “user embeddings” or “user embedding representations” refer to a vector of numbers/features that represent the behavior of the user encoded in a pre-defined dimension. The features can be learned by the interaction-to-vector neural network. In one or more embodiments, the features comprise latent features. In various embodiments, the number/pre-defined dimension of representative features in a user embedding can be a hyperparameter of the interaction-to-vector neural network and/or learned throughout training the interaction-to-vector neural network…”, 6:59-67 and 7:1-5) based on: 1) a first applications data, (“User interactions primarily occur via one or more digital media channels (e.g., network-based digital distribution channels). For instance, user interaction is created when a user interacts with a content item via an electronic message, a web browser, or an Internet-enabled application. Examples of digital media channels also include email, social media, webpages, organic search, paid search, and, in-app notifications”, 5:49-56. “…the components 606-620 of the user embeddings system 104 may, for example, be implemented as one or more operating systems, as one or more stand-alone applications, as one or more modules of an application, as one or more plug-ins, as one or more library functions or functions that may be called by other applications, and/or as a cloud-computing model. Thus, the components 606-620 may be implemented as a stand-alone application, such as a desktop or mobile application. Furthermore, the components 606-620 may be implemented as one or more web-based applications hosted on a remote server. The components 606-620 may also be implemented in a suite of mobile device applications or “apps…”, 24:53-65) wherein the first applications data comprise ..[apps]… installed on the user device of the at least one user (“…a suite of mobile device applications or “apps.” ..”, 24:65-65 and Fig. 6. 
“… if each user in the user interaction data is represented within a vector of the user's user interaction data…”, 6:7-9 ) the first applications data being determined based on one or more first set of [features ], by an applications data determination module; (“… the user embeddings system can utilize the user embeddings as feature…”, 4:56-56. “…the user embeddings system can train the interaction-to-vector neural network to learn user embeddings. As used herein, the terms “user embeddings” or “user embedding representations” refer to a vector of numbers/features that represent the behavior of the user encoded in a pre-defined dimension. ..”, 6:59-65. “…Referring now to FIG. 6, additional detail will be provided regarding capabilities and components of the user embeddings system 104 in accordance with one or more embodiments. In particular, FIG. 6 shows a schematic diagram of an example architecture of the user embeddings system 104 [Examiner interprets as applications data determination module] …”, 23:11-17. See also 1:20-30. “…For example, in one or more embodiments, as described above, the user embeddings system 104 utilizes the learned features and weights of the first weighted matrix as the user embeddings 620 for each user…”, 24:25-28) 2) selecting app embeddings, by an app embeddings selection module, for the applications installed on the user device of the at least one user, wherein the app embeddings are identified for each application of a plurality of applications installed on each user device of a plurality of user devices of a plurality of users; (“…the components 606-620 of the user embeddings system 104 may, for example, be implemented as one or more operating systems, as one or more stand-alone applications, as one or more modules of an application, as one or more plug-ins, as one or more library functions or functions that may be called by other applications, and/or as a cloud-computing model. 
Thus, the components 606-620 may be implemented as a stand-alone application, such as a desktop or mobile application. Furthermore, the components 606-620 may be implemented as one or more web-based applications hosted on a remote server. The components 606-620 may also be implemented in a suite of mobile device applications or “apps.” To illustrate, the components 606-620 may be implemented in an application,…”, 24:53-65) 3) representing each of the selected app embeddings as unique vectors; “(FIGS. 4A-4D and FIG. 5 described various embodiments of training an interaction-to-vector neural network and generating user embeddings for users. Accordingly, the actions and algorithms described in connection with FIGS. 4A-4D and FIG. 5 provide example structure for performing a step for generating user embeddings for the plurality of users based on the organized user interaction data to obtain homogenous embedding representations from heterogeneous user interaction data. More particularly, the actions and algorithms described in training the interaction-to-vector neural network 400 with respect to FIGS. 4A-4D as well as using the trained interaction-to-vector neural network 400 to obtain user embeddings with respect to FIG. 5 can provide structure for performing a step for generating user embeddings for the plurality of users based on the organized user interaction data to obtain homogenous embedding representations from heterogeneous user interaction data”, 21:63-65 and 22:1-13 and Figs. 4A-4D and 5); and 4) performing weighted mean pooling on the selected app embedding, by a pooling module 130, for each application installed on the user device of the at least one user 105 along with weights, wherein the weights comprise the … [interaction data] data associated with frequency ….an application installed on the user device 110 of the at least one user 105: (“ FIG.
5 illustrates utilizing a weighted matrix to identify learned user embeddings [Examiner interprets as data associated with a frequency of updating] in accordance with one or more embodiments”, 2:62-64. “…each time a user performs an interaction with a content item, the user embeddings system may add the user to a database or table of user interactions [Examiner interprets as data associated with a frequency of updating]. Because the user decides the amount, frequency, and type of each interaction, the user interaction data can include a number of users that have different numbers and types of interactions…”, 3:55-60. “ To illustrate, FIG. 5 shows the user embeddings system 104 utilizing a weighted matrix to identify the learned user embeddings. More particularly, FIG. 5 illustrates an example embodiment of the user embeddings system 104 identifying user embeddings from a weighted matrix (e.g., the first weighted matrix) and storing the user embeddings. As shown, FIG. 5 includes an identified user 502, an identified user vector 504, a weighted matrix 506, and an identified user embeddings 508. FIG. 5 also shows a User Embeddings Table 510 identified for all of the users”, 21:1-10); b) deriving user interest representation, by the user interest identification module, based on identified user embeddings for determining user interest of the at least one user; (“ … the user embeddings system 104 can store the user interaction data [examiner interprets as user interest representation], such as in a table or database. For example, in association with each user interaction, the user embeddings system 104 stores a user identifier, a content item identifier, an interaction type, and an interaction timestamp. In one or more embodiments, the interaction type is stored as a Boolean flag set for a particular interaction type. 
In alternative embodiments, the interaction type indicates the trigger event that detected the user interaction (e.g., hover over, click, close, stop, pause, or play…”, 9:63-67 and 10:1-5); c) and providing …[advertisement]…to the at least one user's user device, by a recommendation module, based on determined user interest of the at least one user. (“… user embeddings system 104 can further organize the user interaction table 300 to add additional structure to the user interaction data. To illustrate, as shown in FIG. 3C, the user embeddings system 104 arranges each content item group based on the interaction type 320. In various embodiments, the user embeddings system 104 arranges the interaction types 320 based on the strength and similarity of each interaction type relative to one another for a content item. To illustrate, FIG. 3C shows each interaction type 320 including a number in parentheses indicating the interaction strength (i.e., an interaction strength factor) of the corresponding interaction. For instance, within the first content item (e.g., email or a message), a user click yields a stronger user interaction type (i.e., 4) than opening the message (i.e., 2). With respect to the second content item (e.g., a video advertisement), a user viewing the entire video has a higher interaction strength factor (i.e., 8) than viewing 20-30 seconds of the video (i.e., 6) or skipping the video (i.e., 1…”, 12:1-20). Although Bhatia is very strong in generating user embeddings, “..The user embeddings system also improves computer efficiency. Indeed, by more accurately and precisely identifying relationships between user interactions and content items, …. for predicting a task …”, 5:15-20. To illustrate, FIG. 5 shows the user embeddings system 104 utilizing a weighted matrix to identify the learned user embeddings. More particularly, FIG.
5 illustrates an example embodiment of the user embeddings system 104 identifying user embeddings from a weighted matrix (e.g., the first weighted matrix) and storing the user embeddings. As shown, FIG. 5 includes an identified user 502, an identified user vector 504, a weighted matrix 506, and an identified user embeddings 508. FIG. 5 also shows a User Embeddings Table 510 identified for all of the users”, 21:1-10.

Bhatia does not expressly teach: a first list of applications; a first set of attributes; a frequency of updating an application; and recommendations. However, HREHA discloses a first list of applications, “…as an app listing…”, paragraph 64; a first set of attributes, “[0104] In FIG. 7A, an example format of the recommendation record 600 includes an application name 604-1, an application identifier (ID) 604-2, an actions list 604-3, and application attributes 604-4. The recommendation record 600 generally represents data that can be stored in the recommendation data store 512 for a specific application. The recommendation data store 512 may include thousands or millions of records having the structure specified by FIG. 7A”, paragraph 104; a frequency of updating an application, “[0010] In other features, for a first record of the first data store corresponding to an application from the set of applications installed on the user device, the application recommendation request is triggered in response to the metadata being updated to add an additional action”, paragraphs 10 and 22; and recommendations, “[0039] FIG. 7A is a graphical representation of an example recommendation record format. [0040] FIG. 7B is a graphical representation of another example recommendation record format. [0041] FIG. 7C is a graphical representation of an example recommendation record according to the format of FIG. 7A. [0085] The recommendation request 404 may include a listing of installed apps 404-1.
The installed apps 404-1 may include an exhaustive list of all installed applications…”, see at least paragraphs 39, 40, 41 and 85 and Figs 7A, 7B and 7C. “…Examples of application attributes are displayed at 644-4 and include developer, reviews, ratings, genre, number of downloads [Examiner interprets as a frequency of updating an application]…”, paragraph 112 and Fig. 7C. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate HREHA’s teaching with the teaching of Bhatia. One would have been motivated to provide functionality to store data elements such as “a first list of applications, first set of attributes, a frequency of updating an application” in order to process and identify results for application recommendation (see HREHA abstract and paragraph 104 and Figs. 7A, 7B and 7C). As to claim 12, it comprises the same limitations as claim 1 and is therefore rejected in a similar manner. Further, the claim comprises the system 100 comprising a processor in communication with a memory, the memory comprising modules (see Fig. 1 and associated disclosure). As to claims 2 and 13, Bhatia discloses wherein the one or more first set of attributes (see first weighted matrix in Fig. 4A and associated disclosure) comprises: a) a timestamp of a time of installation of each application on the first list of applications of the at least one user 105, (“…the time of the interaction (e.g., an interaction timestamp…”, 3:48-49 and Fig. 3A and associated disclosure) and b) user device id and app id of each application on the first list of applications of the at least one user 105. (“ a user identifier, 3:45-45. And “…For example, in one or more embodiments, the user embeddings system 104 groups entries in the user interaction table 300 based on the content item (e.g., content item ID 310)….”, 11: 44-46).
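The "first set of attributes" mapped for claims 2 and 13 (an install timestamp plus a device id and app id per installed application) amounts to a small per-install record. A hypothetical sketch of that structure; the field names and values are illustrative only:

```python
from dataclasses import dataclass

@dataclass
class InstallRecord:
    # Attributes recited in claims 2/13; names are illustrative
    app_id: str
    device_id: str
    installed_at: int  # Unix timestamp of the installation

record = InstallRecord(app_id="app-123", device_id="dev-456", installed_at=1_700_000_000)
print(record.device_id, record.app_id)
```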
As to claim 3, Bhatia discloses comprising a step of training a second embeddings identifier module 155 for identifying app embeddings (see “first weighted matrix 414 and second weighted matrix 424”, 14:45-60 and Fig. 4A. See also Fig. 5 and associated disclosure), for each application of the plurality of applications installed on the user devices of each of the plurality of users, for categorizing similar applications (“…the user embeddings system can partition the user interaction data into groups or subsets according to content item (e.g., a content item group[Examiner interprets as the plurality of applications]). Further, the user embeddings system can organize the user interaction data in each content item group …”, 3:30-35. See also “…system 104 can further organize the user interaction table 300…”, 12:1-20. “…the user embeddings system orders and arranges the user interaction data into a hierarchy structure [for categorizing]…”, 4: 22-23). As to claims 4 and 14, Bhatia discloses comprising executing the code by the computer system to cause the computer system to perform an operation further comprising: training the second embeddings identifier module to identify app embeddings, for each application of the plurality of applications installed on the user devices of each of the plurality of users to categorize two applications being installed in sequence within a predetermined time period. (“…Using this data, the disclosed systems can determine interaction types and interaction times for each user. Further, the disclosed systems can partition the user interaction data based on content items such that various types of interactions with each content item are grouped together. In addition, for each content item group, the disclosed systems can further organize the user interaction data into a hierarchy structure based on the interaction type as well as the interaction time…”, 2:25-35). 
As to claims 5 and 15, Bhatia discloses wherein method of training the second embeddings identifier module for identifying app embeddings for each application of the plurality of applications installed (see “first weighted matrix 414 and second weighted matrix 424”, 14:45-60 and Fig. 4A. See also Fig. 5 and associated disclosure), on the user devices of each of the plurality of users comprises the steps of: a) capturing a second applications data for each application of the plurality of applications installed on the user devices of each of the plurality of users wherein each application of the plurality of applications is analyzed, by analysis module, based on one or more second set of attributes (see at least Fig. 3A and associated disclosure) comprising: aa) a timestamp of a time of installation of each application of the plurality of applications installed on the user devices of each of the plurality of users, (see element 330 Fig. 3A) and ab) user device id of each of the plurality of users and app id of each of the installed applications installed on the plurality of user devices; (see element 340 User ID [Examiner interprets as user device id] and element 310, Content ID [Examiner interprets as app id]) b) creating a second list of applications, using a sequence of app ids sorted based on the timestamp; (“…the user interaction table 300 is initially ordered based on the interaction timestamp 330 of each user interaction …”, 11: 30-32 and Fig. 3A); c) creating a co-occurrence graph using the second list of applications for identifying app embeddings for each application of the plurality of applications installed on the user devices of each of the plurality of users; (“…As described in detail below, the user embeddings system 104 trains an interaction-to-vector neural network to learn user embeddings utilizing the ordered list of user identifiers. 
In various embodiments, the interaction-to-vector neural network follows the architecture of a word2vector neural network, such as the skip-gram model. In word2vector neural networks, systems organize data by words in documents. In some embodiments, documents are akin to the content items IDs 310 and words are akin to the User IDs 340 (e.g., users that interact with a content item)…”, 13:35-45); d) and identifying app embeddings by converting the created co-occurrence graph into the app embeddings. (“…the user embeddings system 104 further organizes and sorts user interaction data based on additional categories, such as the user interaction type 320 and the interaction timestamp 330. Thus, the encoding of contextual information among the User IDs 340 is much richer and results in user embeddings…”, 13:45-52. See also element 510 in Fig. 5 and associated disclosure). As to claim 9, Bhatia discloses wherein the user interest of the at least one user 105 comprises user's temporal interest and user's dynamic behavioural features. (“…, in some embodiments, the user embeddings system arranges each subgroup of interaction types by interaction time (e.g., temporal ordering”, 4:7-12. “…To demonstrate, users interact with content items in various ways in their own capacities. For example, one user performs a single interaction with a single content item while, during the same time period, another user performs multiple various interactions with multiple content items. Because each user's behavior and interaction with content items are different [dynamic behavioural features], the user interaction data for each user can appear vastly different from one user to the next…”, 1:47-55). As to claim 10, Bhatia discloses comprising providing recommendations to at least one user device comprises recommending at least one app that matches the identified user interest of the at least one user 105. (“The user embeddings system also improves computer efficiency.
Indeed, by more accurately and precisely identifying relationships between user interactions and content items, …. for predicting a task …”, 5:15-20. “…the term “content item” refers to digital data (e.g., digital data that may be transmitted over a wired or wireless network). In particular, the term “content item” includes text, images, video, audio and/or audiovisual data. Examples of digital content include images, text, graphics, messages animations, notifications, advertisements, reviews, summaries, as well as content related to a product or service…”, 5:30-35. “…the term “user interaction” refers to contact from the user with respect to a content item corresponding to a product or service offered by an entity, such as an individual, group, or business. Examples of user interactions include visiting a website, receiving an email, opening an email, clicking on a link in an email, making a purchase, downloading a native computing application, or downloading, opening, viewing, selecting, playing viewing, pausing, stopping, skipping, continuing viewing, closing, moving, ignoring, resizing, and sharing a content item etc…”, 5:40-50. Bhatia does not expressly teach the word “recommendations”. However, HREHA discloses “…as an app listing…”, paragraph 64, and recommendations, “[0039] FIG. 7A is a graphical representation of an example recommendation record format. [0040] FIG. 7B is a graphical representation of another example recommendation record format. [0041] FIG. 7C is a graphical representation of an example recommendation record according to the format of FIG. 7A. [0085] The recommendation request 404 may include a listing of installed apps 404-1. The installed apps 404-1 may include an exhaustive list of all installed applications…”, see at least paragraphs 39, 40, 41 and 85 and Figs 7A, 7B and 7C.
“…Examples of application attributes are displayed at 644-4 and include developer, reviews, ratings, genre, number of downloads [Examiner interprets as a frequency of updating an application]…”, paragraph 112 and Fig. 7C. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate HREHA’s teaching with the teaching of Bhatia. One would have been motivated to provide functionality to store data elements such as “a first list of applications, first set of attributes, a frequency of updating an application” in order to process and identify results for application recommendation (see HREHA abstract, paragraph 104, and Figs. 7A, 7B and 7C).

As to claim 11, Bhatia discloses using the determined user interest for user segmentation and further using user segmentation for targeted advertising. (“…Upon obtaining the user interaction data, the user embeddings system 104 can begin to partition, segment, or separate the user interaction data into subsections or groups. For example, in one or more embodiments, the user embeddings system 104 groups entries in the user interaction table 300 based on the content item (e.g., content item ID 310…”, 11:40-46. “…to illustrate, in one or more embodiments, the user embeddings system 104 utilizes the user embeddings to perform various use cases like clustering segmentation, segment expansion, and as input to other deep learning/traditional predictive models…”, 21:55-62).

Claims 6-8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent No. 11461634 (Bhatia) in view of US PG. PUB. No. 20160299977 (HREHA) and further in view of US Patent No. 10762436 (Green).

As to claims 6 and 16, Bhatia discloses wherein the co-occurrence graph comprises: a) a plurality of nodes, (“…interaction-to-vector neural network is a word2vec type of neural network, such as a neural network employing a skip-gram architecture…”, 4:40-44 and Fig.
2 and associated disclosure), wherein each node represents an app id of each application installed on the user devices of each of the plurality of users, (“… the user embeddings system 104 trains an interaction-to-vector neural network to learn user embeddings utilizing the ordered list of user identifiers. In various embodiments, the interaction-to-vector neural network follows the architecture of a word2vector neural network, such as the skip-gram model. In word2vector neural networks, systems organize data by words in documents. In some embodiments, documents are akin to the content items IDs 310 and words are akin to the User IDs 340 (e.g., users that interact with a content item)….”, 13:35-45 and Fig. 3D); b) a plurality of edges, wherein each edge represents two applications installed in sequence within a predetermined time period, and (“…the user embeddings system 104 further organizes and sorts user interaction data based on additional categories, such as the user interaction type 320 and the interaction timestamp 330. Thus, the encoding of contextual information among the User IDs 340 is much richer and results in user embeddings that more accurately reflect a user's patterns, traits, habits, and behaviors with respect to interactions with content items”, 13:45-53 and Fig. 3A. “…FIG. 3A shows a user interaction table 300 of user interaction data that indicates user interactions with content items…”, 10:64-65. “…For example, the analytics system 102 and/or the user embeddings system 104 monitors interaction data that includes, but is not limited to, data requests (e.g., URL requests, link clicks), time data (e.g., a timestamp for clicking a link, a time duration for a web browser accessing a webpage, a timestamp for closing an application, time duration of viewing or engaging with a content item…”, 8:67 and 9:1-5.
“…the user embeddings system 104 further organizes and sorts user interaction data based on additional categories, such as the user interaction type 320 and the interaction timestamp 330. Thus, the encoding of contextual information among the User IDs 340 is much richer and results in user embeddings that more accurately reflect a user's patterns, traits, habits, and behaviors with respect to interactions with content items…”, 13:45-53); c) a weight of each of the plurality of edges, wherein the weight of the edge is the number of user devices of a plurality of users on which the two applications are installed in sequence within a predetermined time period. (“To illustrate, FIG. 5 shows the user embeddings system 104 utilizing a weighted matrix to identify the learned user embeddings. More particularly, FIG. 5 illustrates an example embodiment of the user embeddings system 104 identifying user embeddings from a weighted matrix (e.g., the first weighted matrix) and storing the user embeddings. As shown, FIG. 5 includes an identified user 502, an identified user vector 504, a weighted matrix 506, and an identified user embeddings 508. FIG. 5 also shows a User Embeddings Table 510 identified for all of the users”, 21:1-10. “…For example, the analytics system 102 and/or the user embeddings system 104 monitors interaction data that includes, but is not limited to, data requests (e.g., URL requests, link clicks), time data (e.g., a timestamp for clicking a link, a time duration for a web browser accessing a webpage, a timestamp for closing an application, time duration of viewing or engaging with a content item…”, 8:67, 9:1-5 and 10:64-65).

Bhatia, however, does not expressly disclose edges and nodes. Green discloses “…training module 208 is configured to embed sequence information using a modified skip-gram technique…”, 6:55-58, and “…the social network can be represented by a graph, i.e., a data structure including edges and nodes….”, 11:15-17.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Green’s teaching with the teaching of Bhatia. One would have been motivated to provide a structure that explicitly teaches nodes and edges in order to provide “least one model that is trained using a skip-gram ..technique“, (see Green abstract).

As to claim 7, Bhatia does not disclose, but Green discloses, wherein the edges are directional (“Connections in the social networking system 630 are usually in both directions…”, 11:43-44). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Green’s teaching with the teaching of Bhatia. One would have been motivated to provide a structure that explicitly teaches the direction of edges in order to provide “least one model that is trained using a skip-gram ..technique“, (see Green abstract).

As to claim 8, Bhatia does not disclose, but Green discloses, retraining the second embeddings identifier module 155 based on a change in edges of the co-occurrence graph, wherein the change in edge represents the change in two applications being installed in sequence within a predetermined time period. (“…an edge in the social graph is generated connecting a node representing the first user and a second node representing the second user. As various nodes relate or interact with each other, the social networking system 630 modifies edges connecting the various nodes to reflect the relationships and interactions…”, 12:44-48). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Green’s teaching with the teaching of Bhatia.
One would have been motivated to provide a structure that explicitly teaches modifying edges in order to facilitate “In various embodiments, the user embeddings system 104 trains one or more of these neural network layers 612 through via back propagation, as described above in connection with FIGS. 4A-4D”, (Bhatia 24:9-12).

Response to Arguments

Applicant’s arguments of 8/11/2025 have been carefully considered but are not persuasive. The objection of claim 14 is withdrawn in view of Applicant’s amendment. Arguments directed to the § 101 rejection are moot in view of the Examiner’s note above.

Applicant argues (remarks 14-17):

Claim Rejections - 35 U.S.C. § 103

I. Claims 1-5 and 9-15 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Bhatia, U.S. Patent No. 11,461,634 ("Bhatia"), in view of HREHA, U.S. PG. PUB. No. 2016/0299977 ("HREHA"). Applicants respectfully traverse the rejection….

II. Claims 6-8 and 16 stand rejected under 35 U.S.C. § 103(a) as unpatentable over Bhatia, U.S. Patent No. 11,461,634 ("Bhatia"), in view of HREHA, U.S. PG. PUB. No. 2016/0299977 ("HREHA"), in view of Green, U.S. Patent No. 10,762,436 ("Green"). Applicants respectfully traverse the rejection….

Bhatia does not disclose a purpose of the embeddings data and, thus, does not enable any use of the embeddings data associated with installed application data. HREHA also does not disclose "accessing embeddings data associated with installed application data on a user device, wherein the embeddings data represents installed applications on the user device" nor "representing each of the selected app embeddings as unique vectors." Furthermore, Bhatia in combination with HREHA fail to teach or suggest the specific purposes of the embeddings data representing installed applications on a user device…

In response, the Examiner respectfully notes that Applicant has not provided persuasive rebuttal evidence to overcome the prima facie case.
Further, the elements of the instant Application were old and well known at the time of the invention, and the combination set forth in the rejection produces predictable results. The Examiner asserts that the prima facie case established here reaches all the claims; evidence has been provided for each of the limitations, see the rejection above. Regarding the allegation that “Bhatia in combination with HREHA fail to teach or suggest the specific purposes of the embeddings data representing installed applications on a user device”, the Examiner asserts that the references are properly combinable not only because they are in the same field of endeavor, app recommendations, but also because the prior art suggests the desirability of the combination.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

“Classification of Mobile Applications with rich information”, IEEE, 2015. This publication discloses “Mobile Application activates an important role in the daily lives of mobile users. Intuitively, the study of the use of mobile Apps can help to understand the user favorites, such as App recommendation, user segmentation and target advertising”.

US Pg. Pub. 20210201149, “METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR EMBEDDING USER APP INTEREST”. This publication discloses subject matter that is very close to the instant claims and can be used in future prosecution: “A method, an apparatus, a device and a storage medium for embedding user app interest are provided. The method includes: acquiring a user existing app installation list and a user app installation list within a predetermined time window, where the app includes app ID information and app category information; inputting the existing app installation list and the app installation list within the predetermined time window into a pre-trained user app interest embedding model to obtain a user app interest embedding vector.
By combining the user existing app installation list information and the user recent app installation list information, the user app interest embedding vector may simultaneously reflect the user long-term interest and the user short-term interest”, (abstract).

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA VICTORIA VANDERHORST, whose telephone number is (571) 270-3604. The examiner can normally be reached Monday through Friday, 8:30 AM to 4:30 PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ashraf Waseem, can be reached at (571) 270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARIA V VANDERHORST/
Primary Examiner, Art Unit 3621
11/5/2025
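Editorial sketch (not part of the record): the co-occurrence graph disputed in claims 6-8 is described as a data structure whose nodes are app IDs, whose directional edges connect two apps installed in sequence within a predetermined time period, and whose edge weights count the user devices exhibiting that sequence. A minimal Python illustration of that limitation, assuming hypothetical function names, a seven-day window, and invented sample data:

```python
from collections import defaultdict

# Predetermined time period for "installed in sequence" (seconds).
# Seven days is an illustrative choice, not a value from the claims.
WINDOW = 7 * 24 * 3600

def build_cooccurrence_graph(install_logs):
    """install_logs: {device_id: [(app_id, install_timestamp), ...]}.

    Returns {(app_a, app_b): weight}, where the directional edge
    (app_a, app_b) means app_b was installed after app_a within WINDOW,
    and the weight is the number of devices showing that sequence.
    """
    weights = defaultdict(int)
    for device, installs in install_logs.items():
        ordered = sorted(installs, key=lambda x: x[1])  # order by install time
        seen = set()  # count each device at most once per edge
        for (a, t1), (b, t2) in zip(ordered, ordered[1:]):
            if t2 - t1 <= WINDOW and (a, b) not in seen:
                weights[(a, b)] += 1  # directional edge, as in claim 7
                seen.add((a, b))
    return dict(weights)

graph = build_cooccurrence_graph({
    "device1": [("maps", 0), ("rideshare", 3600)],
    "device2": [("maps", 100), ("rideshare", 7200)],
})
# graph[("maps", "rideshare")] == 2: both devices installed the apps in sequence
```

Under claim 8's retraining limitation, a change in these edges (a new sequential-install pattern appearing or disappearing) would trigger retraining of the downstream embeddings model; that step is not sketched here.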

Prosecution Timeline

Oct 18, 2023
Application Filed
Feb 06, 2025
Non-Final Rejection — §103
Aug 11, 2025
Response Filed
Nov 05, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591905
SYSTEMS, METHODS AND COMPUTER PROGRAM PRODUCTS FOR PROVIDING AND VERIFYING PURCHASE OFFERS
2y 5m to grant Granted Mar 31, 2026
Patent 12555290
PROACTIVELY-GENERATED CONTENT CREATION BASED ON TRACKED PERFORMANCE
2y 5m to grant Granted Feb 17, 2026
Patent 12548043
QUERY-PRODUCT INTERFACE FOR ECOMMERCE PLATFORM
2y 5m to grant Granted Feb 10, 2026
Patent 12548049
CELEBRITY-BASED AR ADVERTISING AND SOCIAL NETWORK
2y 5m to grant Granted Feb 10, 2026
Patent 12541774
PRUNING FIELD WEIGHTS FOR CONTENT SELECTION
2y 5m to grant Granted Feb 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
48%
Grant Probability
86%
With Interview (+37.8%)
3y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 579 resolved cases by this examiner. Grant probability derived from career allow rate.
