Prosecution Insights
Last updated: April 19, 2026
Application No. 18/351,339

RECOMMENDATING PERSONALIZED DIGITAL DESIGN TEMPLATES UTILIZING CONTENT EMBEDDINGS

Non-Final OA: §101, §103
Filed: Jul 12, 2023
Examiner: SHECHTMAN, CHERYL MARIA
Art Unit: 2164
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adobe Inc.
OA Round: 1 (Non-Final)

Grant Probability: 72% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (215 granted of 300 resolved; +16.7% vs Tech Center average; above average)
Interview Lift: +28.1% higher allowance rate among resolved cases with an interview than without
Typical Timeline: 3y 2m average prosecution; 21 applications currently pending
Career History: 321 total applications across all art units

Statute-Specific Performance

§101: 22.2% (-17.8% vs TC avg)
§103: 34.8% (-5.2% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 300 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This communication is in response to the Application filed on July 12, 2023. Claims 1-20 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/12/2023 is being considered by the examiner. The listing of references in the specification is not a proper information disclosure statement. 37 CFR 1.98(b) requires a list of all patents, publications, or other information submitted for consideration by the Office, and MPEP § 609.04(a) states, "the list may not be incorporated into the specification but must be submitted in a separate paper." Therefore, unless the references have been cited by the examiner on form PTO-892, they have not been considered.

Specification

The title of the invention has a typographical error. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: "Recommending Personalized Digital Design Templates Utilizing Content Embeddings".

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: "transformer" and "sentence transformer" in claims 1, 3, 9, 11, and 16. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. The instant specification describes the "transformer" and "sentence transformer" as transformer 304, which is a machine learning model that includes a neural network architecture [para 47].

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1 and 16 recite: extracting metadata from digital design templates from a database of digital design templates; generating, utilizing a transformer, a plurality of embedding vectors for the digital design templates from the extracted metadata; generating, utilizing the transformer, a user embedding vector from one or more user events of a user; and identifying, utilizing a similarity search model, a subset of digital design templates from the database of digital design templates to recommend to the user by identifying one or more embedding vectors of the plurality of embedding vectors that satisfy a similarity threshold to the user embedding vector.

Step 1: The claims as a whole fall within one or more statutory categories.

Step 2A prong 1: At least claims 1 and 16 recite limitations that are abstract ideas. The limitation "generating a plurality of embedding vectors for the digital design templates from the extracted metadata" is a mental step that can be achieved by mentally determining and generating embedding vectors based on select metadata criteria of the digital design templates. Thus, the claimed limitation can be performed by the human mind.
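For readers unfamiliar with the claimed technique, the pipeline recited in claims 1 and 16 can be sketched in a few lines. This is a hypothetical illustration, not the application's actual implementation: a toy hashed bag-of-words embedding stands in for the claimed transformer, cosine similarity stands in for the similarity search model, and the `embed`, `cosine`, and `recommend` names are invented for this sketch.

```python
# Hypothetical sketch of the claim 1 / claim 16 pipeline. A toy hashed
# bag-of-words embedding stands in for the claimed transformer so the
# example stays self-contained; cosine similarity plays the role of the
# similarity search model.
import math
import zlib
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a transformer embedding: hashed bag of words."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[zlib.crc32(word.encode()) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def recommend(templates: dict[str, str], user_events: list[str],
              threshold: float = 0.3) -> list[str]:
    # (1) embed metadata extracted from each template
    template_vecs = {name: embed(meta) for name, meta in templates.items()}
    # (2) embed the user's events as one signal
    user_vec = embed(" ".join(user_events))
    # (3) keep templates whose embeddings satisfy the similarity threshold
    return [name for name, vec in template_vecs.items()
            if cosine(user_vec, vec) >= threshold]

picks = recommend(
    {"birthday": "birthday party invitation card",
     "resume": "professional resume layout"},
    ["searched birthday invitation", "remixed party card"],
)
```

A production system would replace `embed` with a trained sentence transformer and the linear scan with an approximate-nearest-neighbor index, but the threshold-on-similarity shape of the claim is the same.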
Similarly, the limitation "generating a user embedding vector from one or more user events of a user" is a mental step. Here, the generation of a user embedding vector can be performed by the human mind based on user event criteria. Furthermore, the limitation "identifying a subset of digital design templates from the database of digital design templates to recommend to the user by identifying one or more embedding vectors of the plurality of embedding vectors that satisfy a similarity threshold to the user embedding vector" is also a mental step. The step of identifying digital design templates that satisfy a user-determined similarity threshold to recommend can be performed by the human mind.

Step 2A prong 2: Claims 1 and 16 recite the limitation "extracting metadata from digital design templates from a database of digital design templates". This limitation is an additional element and is insignificant extra-solution activity, as retrieval of select metadata from digital design template data (i.e., mere data gathering) is "obtaining information" as identified in MPEP 2106.05(g), and does not provide integration into a practical application. Furthermore, claims 1 and 16 recite the following additional elements: "at least one processing device", "transformer", and "similarity search model". These recited additional elements are a high-level recitation of generic computer components and software used to perform the mental process as applied on a computer, as in MPEP 2106.05(f), which does not provide integration into a practical application.

Step 2B: The conclusions for the additional elements representing mere implementation using a computer are carried over and do not provide significantly more. With respect to the "extracting" limitation identified as insignificant extra-solution activity above, when re-evaluated this element is well-understood, routine, and conventional, as evidenced by the court cases in MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);" and thus remains insignificant extra-solution activity that does not provide significantly more. Therefore, claims 1 and 16 as a whole do not change this conclusion and the claims are ineligible.

Claim 9 recites: A system comprising: one or more memory components; and one or more processing devices coupled to the one or more memory components, the one or more processing devices to perform operations comprising: generating, utilizing a transformer, a plurality of embedding vectors for digital design templates by extracting metadata from the digital design templates; detecting, utilizing a real-time user event aggregation model, one or more user events of a user with respect to the digital design templates; assigning, utilizing an exponential decay function, weights to the one or more user events to generate one or more weighted user events; generating, utilizing the transformer, a user embedding vector from the one or more weighted user events; and identifying, utilizing a similarity search model, a subset of digital design templates from the digital design templates to recommend to the user by identifying one or more embedding vectors of the plurality of embedding vectors that satisfy a similarity threshold to the user embedding vector.

Step 1: The claim as a whole falls within one or more statutory categories.

Step 2A prong 1: At least claim 9 recites limitations that are abstract ideas.
The limitation "generating a plurality of embedding vectors for digital design templates" is a mental step that can be achieved by mentally determining and generating embedding vectors based on some select criteria. Thus, the claimed limitation can be performed by the human mind. Similarly, the limitation "generating a user embedding vector from one or more weighted user events" is a mental step. Here, the generation of a user embedding vector can be performed by the human mind based on user event criteria. The limitation "detecting one or more user events of a user with respect to the digital design templates" is a mental step that can be achieved by mentally detecting the occurrence of one or more user events with respect to the templates. Thus, the claimed limitation can be performed by the human mind. The limitation "assigning, utilizing an exponential decay function, weights to the one or more user events to generate one or more weighted user events" is also a mental step that the user can carry out by mentally assigning a weight to the user events. Furthermore, the limitation "identifying a subset of digital design templates from the digital design templates to recommend to the user by identifying one or more embedding vectors of the plurality of embedding vectors that satisfy a similarity threshold to the user embedding vector" is also a mental step. The step of identifying digital design templates that satisfy a user-determined similarity threshold to recommend can be performed by the human mind.

Step 2A prong 2: Claim 9 recites the limitation "extracting metadata from digital design templates". This limitation is an additional element and is insignificant extra-solution activity, as retrieval of select metadata from digital design template data (i.e., mere data gathering) is "obtaining information" as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
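The "real-time user event aggregation" that distinguishes claim 9 can be pictured as folding a stream of timestamped events into one combined user signal that a transformer could then embed. A minimal sketch, in which the `UserEvent` and `aggregate` names, the event kinds, and the plain-text signal format are all assumptions made for illustration:

```python
# Minimal sketch of aggregating a user event stream into one combined
# signal (cf. claims 9 and 11). Event kinds and the text format of the
# combined signal are illustrative assumptions, not the application's.
from dataclasses import dataclass

@dataclass
class UserEvent:
    kind: str         # e.g. "search", "remix", "export"
    payload: str      # query text or template name
    timestamp: float  # seconds since some epoch

def aggregate(events: list[UserEvent]) -> str:
    """Order events newest-first and join them into one text signal."""
    ordered = sorted(events, key=lambda e: e.timestamp, reverse=True)
    return " ".join(f"{e.kind}:{e.payload}" for e in ordered)

signal = aggregate([
    UserEvent("search", "birthday invitation", 100.0),
    UserEvent("export", "party-flyer", 250.0),
])
# signal == "export:party-flyer search:birthday invitation"
```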
Furthermore, claim 9 recites the following additional elements: "a system", "one or more memory components", "one or more processing devices", "transformer", "real-time user event aggregation model", and "similarity search model". These recited additional elements are a high-level recitation of generic computer components and software used to perform the mental process as applied on a computer, as in MPEP 2106.05(f), which does not provide integration into a practical application.

Step 2B: The conclusions for the additional elements representing mere implementation using a computer are carried over and do not provide significantly more. With respect to the "extracting" limitation identified as insignificant extra-solution activity above, when re-evaluated this element is well-understood, routine, and conventional, as evidenced by the court cases in MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);" and thus remains insignificant extra-solution activity that does not provide significantly more. Therefore, claim 9 as a whole does not change this conclusion and the claim is ineligible.

Claims 2, 10 and 17 depend from claims 1, 9 and 16 and thus include all the limitations of claims 1, 9 and 16; therefore claims 2, 10 and 17 recite the same abstract ideas of "mental processes".
Claims 2, 10 and 17 furthermore recite: wherein extracting the metadata further comprises: extracting from the digital design templates at least one of title, text, description, categories, topics, or tasks; and determining stylistic elements of the digital design templates based on at least one of the title, the text, the description, the categories, the topics, or the tasks.

Step 1: Claims 2, 10 and 17 as a whole fall within one or more statutory categories.

Step 2A prong 1: Claims 2, 10 and 17 recite limitations that are abstract ideas. The limitation "determining stylistic elements of the digital design templates based on at least one of the title, the text, the description, the categories, the topics, or the tasks" is a mental step. One can mentally make a determination of stylistic elements of the template based on criteria. Thus, the claimed limitation can be performed by the human mind.

Step 2A prong 2: Claims 2, 10 and 17 recite the limitation "wherein extracting the metadata further comprises: extracting from the digital design templates at least one of title, text, description, categories, topics, or tasks". This extracting step further defines the extracting step in claims 1, 9 and 16, from which the claims depend, which was considered insignificant extra-solution activity as retrieval/receiving of data (i.e., mere data gathering) such as "obtaining information" as identified in MPEP 2106.05(g), and does not provide integration into a practical application.

Step 2B: With respect to the "extracting" limitation identified as insignificant extra-solution activity above, when re-evaluated this element is well-understood, routine, and conventional, as evidenced by the court cases in MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);" and thus remains insignificant extra-solution activity that does not provide significantly more. Therefore, claims 2, 10 and 17 as a whole do not change this conclusion and the claims are ineligible.

Claim 3 depends from claim 1 and thus includes all the limitations of claim 1; therefore claim 3 recites the same abstract ideas of "mental processes". Claim 3 recites: wherein generating the plurality of embedding vectors further comprises: utilizing a sentence transformer to generate the plurality of embedding vectors that represent content within the digital design templates; and storing the plurality of embedding vectors in a database corresponding with the digital design templates.

Step 1: Claim 3 as a whole falls within one or more statutory categories.

Step 2A prong 1: Claim 3 recites limitations that are abstract ideas. The limitation "generate the plurality of embedding vectors that represent content within the digital design templates" is a mental step that can be achieved by mentally determining and generating embedding vectors based on content criteria. Thus, the claimed limitation can be performed by the human mind.

Step 2A prong 2: Claim 3 recites the limitation "storing the plurality of embedding vectors in a database corresponding with the digital design templates". This storing step is considered post-solution extra-solution activity and is use of a computer or other machinery in its ordinary capacity for tasks such as storing data, and does not integrate a judicial exception into a practical application or provide significantly more.
Furthermore, claim 3 recites the additional element "a sentence transformer". This recited additional element is a high-level recitation of generic computer software used to perform the mental process as applied on a computer, as in MPEP 2106.05(f), which does not provide integration into a practical application.

Step 2B: The conclusions for the additional elements representing mere implementation using a computer are carried over and do not provide significantly more. With respect to the "storing" limitation identified as insignificant extra-solution activity above, when re-evaluated this element is well-understood, routine, and conventional, as evidenced by the court cases in MPEP 2106.05(d)(II): "iv. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93", and thus remains insignificant extra-solution activity that does not provide significantly more. Therefore, claim 3 as a whole does not change this conclusion and the claim is ineligible.

Claims 4, 12 and 18 depend from claims 1, 9 and 16 and thus include all the limitations of claims 1, 9 and 16; therefore claims 4, 12 and 18 recite the same abstract ideas of "mental processes". Claims 4, 12 and 18 recite: detecting one or more events of the user in real-time by utilizing a user event stream aggregation model, wherein the one or more events include digital design template remixes, digital design template exports, and search queries performed by the user; and generating the user embedding vector from the detected one or more events.

Step 1: Claims 4, 12 and 18 as a whole fall within one or more statutory categories.

Step 2A prong 1: Claims 4, 12 and 18 recite limitations that are abstract ideas. The limitations "detecting one or more events of the user in real-time, wherein the one or more events include digital design template remixes, digital design template exports, and search queries performed by the user; and generating the user embedding vector from the detected one or more events" are mental steps that can be achieved by mentally detecting user events and generating embedding vectors based on the user event criteria. Thus, the claimed limitations can be performed by the human mind.

Step 2A prong 2: Claims 4, 12 and 18 recite the additional element "a user event stream aggregation model". This recited additional element is a high-level recitation of generic computer software used to perform the mental process as applied on a computer, as in MPEP 2106.05(f), which does not provide integration into a practical application.

Step 2B: The conclusions for the additional elements representing mere implementation using a computer are carried over and do not provide significantly more. Therefore, claims 4, 12 and 18 as a whole do not change this conclusion and the claims are ineligible.

Claim 5 depends from claim 1 and thus includes all the limitations of claim 1; therefore claim 5 recites the same abstract ideas of "mental processes". Claim 5 recites: generating a weight for each of the one or more user events of the user by utilizing an exponential decay function.

Step 1: Claim 5 as a whole falls within one or more statutory categories.

Step 2A prong 1: Claim 5 recites limitations that are abstract ideas. The limitation "generating a weight for each of the one or more user events of the user by utilizing an exponential decay function" is a mental step that can be achieved by a user mentally determining a weight for each user event by using the exponential decay function. Thus, the claimed limitation can be performed by the human mind.
Step 2A prong 2: Claim 5 does not recite any additional elements that would integrate the judicial exception into a practical application.

Step 2B: Claim 5 does not include any additional elements that would provide significantly more. Therefore, claim 5 as a whole does not change this conclusion and the claim is ineligible.

Claim 6 depends from claim 5 and thus includes all the limitations of claim 5; therefore claim 6 recites the same abstract ideas of "mental processes". Claim 6 recites: generating the weight for each of the one or more user events based on at least one of recency of the one or more user events or individualized geo-seasonal intent, wherein generating the weight based on individualized geo-seasonal intent comprises determining an individualized annual pattern of behavior; and identifying a number of digital design templates of the subset of digital design templates based on the weight for each of the one or more user events.

Step 1: Claim 6 as a whole falls within one or more statutory categories.

Step 2A prong 1: Claim 6 recites limitations that are abstract ideas. The limitations "generating the weight for each of the one or more user events based on at least one of recency of the one or more user events or individualized geo-seasonal intent, wherein generating the weight based on individualized geo-seasonal intent comprises determining an individualized annual pattern of behavior; and identifying a number of digital design templates of the subset of digital design templates based on the weight for each of the one or more user events" are mental steps that can be achieved by a user mentally determining a weight for each user event based on certain selection criteria and identifying a number of templates based on the determined weight. Thus, the claimed limitations can be performed by the human mind.

Step 2A prong 2: Claim 6 does not recite any additional elements that would integrate the judicial exception into a practical application.
Step 2B: Claim 6 does not include any additional elements that would provide significantly more. Therefore, claim 6 as a whole does not change this conclusion and the claim is ineligible.

Claims 7 and 20 depend from claims 1 and 16 and thus include all the limitations of claims 1 and 16; therefore claims 7 and 20 recite the same abstract ideas of "mental processes". Claims 7 and 20 recite: wherein identifying the subset of digital design templates further comprises comparing, utilizing the similarity search model, the user embedding vector with the one or more embedding vectors for the digital design templates within a single latent space to determine in real-time which embedding vectors of the one or more embedding vectors satisfy the similarity threshold to the user embedding vector.

Step 1: Claims 7 and 20 as a whole fall within one or more statutory categories.

Step 2A prong 1: Claims 7 and 20 recite limitations that are abstract ideas. The limitation "identifying the subset of digital design templates further comprises comparing the user embedding vector with the one or more embedding vectors for the digital design templates within a single latent space to determine in real-time which embedding vectors of the one or more embedding vectors satisfy the similarity threshold to the user embedding vector" is a mental step that can be achieved by comparing user embedding vectors based on a selected similarity threshold. Thus, the claimed limitation can be performed by the human mind.

Step 2A prong 2: Claims 7 and 20 recite the additional element "the similarity search model". This recited additional element is a high-level recitation of generic computer software used to perform the mental process as applied on a computer, as in MPEP 2106.05(f), which does not provide integration into a practical application.
Step 2B: The conclusions for the additional elements representing mere implementation using a computer are carried over and do not provide significantly more. Therefore, claims 7 and 20 as a whole do not change this conclusion and the claims are ineligible.

Claim 8 depends from claim 7 and thus includes all the limitations of claim 7; therefore claim 8 recites the same abstract ideas of "mental processes". Claim 8 recites: wherein comparing the user embedding vector with the one or more embedding vectors for the digital design templates further comprises performing real-time pairwise distance computations across the single latent space to identify, in real-time, the subset of digital design templates.

Step 1: Claim 8 as a whole falls within one or more statutory categories.

Step 2A prong 1: Claim 8 recites limitations that are abstract ideas. The limitation "performing real-time pairwise distance computations across the single latent space to identify, in real-time, the subset of digital design templates" recites mathematical calculations and falls under the "mathematical concepts" grouping of abstract ideas.

Step 2A prong 2: Claim 8 does not recite any additional elements that would integrate the judicial exception into a practical application.

Step 2B: Claim 8 does not include any additional elements that would provide significantly more. Therefore, claim 8 as a whole does not change this conclusion and the claim is ineligible.

Claim 15 depends from claim 9 and thus includes all the limitations of claim 9; therefore claim 15 recites the same abstract ideas of "mental processes".
Claim 15 recites: wherein identifying the subset of digital design templates further comprises: comparing, utilizing the similarity search model, the user embedding vector with the one or more embedding vectors for the digital design templates within a single latent space to determine pairwise distance computations; and determining which embedding vectors of the one or more embedding vectors satisfy the similarity threshold to the user embedding vector based on the pairwise distance computations.

Step 1: Claim 15 as a whole falls within one or more statutory categories.

Step 2A prong 1: Claim 15 recites limitations that are abstract ideas. The limitation "identifying the subset of digital design templates further comprises comparing the user embedding vector with the one or more embedding vectors for the digital design templates within a single latent space to determine pairwise distance computations; and determining which embedding vectors of the one or more embedding vectors satisfy the similarity threshold to the user embedding vector based on the pairwise distance computations" is a mental step that can be achieved by comparing user embedding vectors against a selected similarity threshold based on distance computations. Thus, the claimed limitation can be performed by the human mind.

Step 2A prong 2: Claim 15 recites the additional element "the similarity search model". This recited additional element is a high-level recitation of generic computer software used to perform the mental process as applied on a computer, as in MPEP 2106.05(f), which does not provide integration into a practical application.

Step 2B: The conclusions for the additional elements representing mere implementation using a computer are carried over and do not provide significantly more. Therefore, claim 15 as a whole does not change this conclusion and the claim is ineligible.
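The "pairwise distance computations across the single latent space" recited in claims 8 and 15 reduce, at their simplest, to one distance per template plus a threshold filter. A toy sketch with made-up three-dimensional vectors; a production system would search millions of high-dimensional embeddings through an approximate-nearest-neighbor index rather than a dictionary scan:

```python
# Toy illustration of claims 8 and 15: template embeddings and the user
# embedding share one latent space, and recommendation is a threshold on
# pairwise distances. The vectors below are invented for illustration.
import math

templates = {
    "flyer":    [0.9, 0.1, 0.0],
    "resume":   [0.0, 0.8, 0.6],
    "postcard": [0.7, 0.3, 0.1],
}
user_vec = [0.8, 0.2, 0.0]

# One pass over the latent space: Euclidean distance to each template
dists = {name: math.dist(user_vec, vec) for name, vec in templates.items()}

# A distance cutoff plays the role of the claimed similarity threshold
threshold = 0.3
recommended = sorted(name for name, d in dists.items() if d <= threshold)
# recommended == ["flyer", "postcard"]
```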
Claim 11 depends from claim 9 and thus include all the limitations of claim 9, therefore claim 11 recites the same abstract ideas of "mental processes". Claim 11 recites: wherein the operations further comprise: aggregating, utilizing the real-time user event aggregation model, the one or more user events into a combined user signal; and generating, utilizing the transformer, the user embedding vector from the combined user signal. Step 1: Claim 11 as a whole falls within one or more statutory categories. Step 2A prong 1: Claim 11 recite limitations that are abstract ideas. The limitation “generating, utilizing the transformer, the user embedding vector from the combined user signal” are mental steps that can be achieved by a user mentally generating an embedding vector from aggregated data (i.e. the combined user signal). The instant specification describes the combined user signal as containing aggregated data such as timestamp history of events or user events [para 37,80]. Thus, the claimed limitations can be performed by the human mind. Step 2A prong 2: Claim 11 recites the limitation ‘aggregating, utilizing the real-time user event aggregation model, the one or more user events into a combined user signal’. This limitation is an additional element and is insignificant extra-solution activity as retrieval and collection of user event or timestamp data (i.e. mere data gathering) such as 'obtaining information' as identified in MPEP 2106.05(g) and does not provide integration into a practical application. Furthermore, Claim 11 recites the following additional elements “transformer” and “real-time user event aggregation model”, note that these recited additional elements are a high-level recitation of generic computer software components to perform the mental process and applied on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application. 
Step 2B: The conclusions for the additional elements representing mere implementation using a computer are carried over and do not provide significantly more. With respect to the "aggregating" limitation identified as insignificant extra-solution activity above, when re-evaluated this element is well-understood, routine, and conventional as evidenced by the court cases in MPEP 2106.05(d)(II): "i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); … OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network);" and thus remains insignificant extra-solution activity that does not provide significantly more. Therefore, claim 11 as a whole does not change this conclusion, and the claim is ineligible.

Claims 13, 14, and 19 depend from claims 9 and 16 and thus include all the limitations of claims 9 and 16; therefore, claims 13, 14, and 19 recite the same abstract ideas of "mental processes."
Claims 13, 14, and 19 recite: (claim 13) wherein assigning the weights further comprises assigning, utilizing the exponential decay function, a first weight to the digital design template export and a second weight to the search query performed by the user, wherein the first weight is greater than the second weight; (claim 14) wherein assigning the weights to the one or more user events further comprises identifying a number of digital design templates of the subset of digital design templates based on the weights for each of the one or more user events; and (claim 19) wherein the operations further comprise: generating a weight for each of the one or more user events of the user by utilizing an exponential decay function; and identifying a number of digital design templates of the subset of digital design templates based on the weight for each of the one or more user events.

Step 1: Claims 13, 14, and 19 as a whole fall within one or more statutory categories.

Step 2A prong 1: Claims 13, 14, and 19 recite limitations that are abstract ideas. The limitations "assigning/generating the weights further comprises assigning, utilizing the exponential decay function, a first weight to the digital design template export and a second weight to the search query performed by the user, wherein the first weight is greater than the second weight" and "assigning the weights to the one or more user events further comprises identifying a number of digital design templates of the subset of digital design templates based on the weights for each of the one or more user events" are mental steps that can be achieved by a user mentally determining or generating weights to assign to templates and search queries, and mentally identifying templates based on the weights for user events. Thus, the claimed limitations can be performed in the human mind.
Step 2A prong 2: Claims 13, 14, and 19 do not recite any additional elements that would integrate the judicial exception into a practical application.

Step 2B: Claims 13, 14, and 19 do not include any additional elements that would provide significantly more. Therefore, claims 13, 14, and 19 as a whole do not change this conclusion, and the claims are ineligible.

To expedite a complete examination of the instant application, the claims rejected under 35 U.S.C. 101 (nonstatutory) above are further rejected as set forth below in anticipation of applicant amending these claims to place them within the four statutory categories of invention.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3.
Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 2, 7, 8, 16, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over US PgPub 2021/0174164 by Hsieh et al. (hereafter Hsieh), and further in view of US PgPub 2023/0325391 by Li et al. (hereafter Li).

Referring to claim 16, Hsieh discloses a non-transitory computer readable medium storing executable instructions which, when executed by at least one processing device [processor readable storage medium 1005, processors 1002A-N, para 210, Fig 21], cause the at least one processing device to perform operations comprising: extracting metadata from content items from a database of content items [content repository 110 is populated with data from a data source looking to make use of the system, e.g.,
product related data, content related data or crawled data may be uploaded to the content repository, para 47, see Fig 1]; generating, utilizing a transformer, a plurality of embedding vectors for the content items from the extracted metadata [wherein a content neural network (C-NN) 120 functions as a neural network model of items constituting incorporation and mapping of content data items in relation to each other, para 49; C-NN 120 can process actual semantic information embedded within a photo media content item to identify key objects within, or from text media to identify important concepts and keywords, para 51; a transformer may be used to enrich the content embeddings, para 51; content items are represented as embeddings or vectors, para 76; processors 1002A-N include Machine learning/Deep learning processing units such as a Tensor Processing unit, para 211, Fig 21]; generating, utilizing the transformer, a user embedding vector from one or more user events of a user [wherein a user embedding is created from user feature data input into a user neural network model (U-NN) 130 and processed through a matchmaking neural network model in a shared dimensional space to yield a user shared-item embedding S220, para 79-80, Fig 1; Fig 5, elements S210-S220; user shared-item embeddings represent user interaction event data, para 60, 76; wherein a single item-related neural network can be used in place of a C-NN 120 and a U-NN 130, para 61]; and identifying, utilizing a similarity search model, a subset of content items from the database of content items to recommend to the user by identifying one or more embedding vectors of the plurality of embedding vectors that satisfy a similarity threshold to the user embedding vector [matchmaking neural network (MmNN) 140 maps the user embeddings and content embeddings of the U-NN 130 and C-NN 120 models into a shared vector space, para 64-65; the MmNN 140 model converts the content and user
shared-item embeddings, respectively, para 65; wherein an analysis of a shared-item embedding is applied in the MmNN 140 model with respect to a user shared-item embedding in selecting at least one content item, para 79, Fig 4, element S140, Fig 5, element S230; analysis of shared-item embeddings includes identifying one or more shared-item embeddings satisfying a proximity condition, such as a count of content shared-item embeddings within a specified distance threshold from a user shared-item embedding, to determine content or users with some degree of similarity, para 128; Euclidean distance thresholds, para 181-183].

Referring further to claim 16, while Hsieh discloses all of the above claimed subject matter, and also discloses that the content items include photo/graphical items [para 50], it remains silent as to the graphical content items specifically being digital design templates. Li discloses a visual content library 150 that includes a vast library of visual assets comprising a template library 158 that includes templates containing image, icon and illustration content types [see Fig 1B, element 158, para 43-46]. Hsieh and Li are analogous art directed to the same field of endeavor: the analysis of content items. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the photo/graphical content items of Hsieh with the templates of Li because doing so would achieve predictable results. The ordinary skilled artisan would have been motivated to make this modification because the image, icon and illustration template assets of Li further refine the type of photo/graphical content items taught by Hsieh.

Referring to claim 1, the limitations of the claim are similar to those of claim 16 in the form of a method [Hsieh, Abstract]. As such, claim 1 is rejected for the same reasons as claim 16.
Referring to claim 2, Hsieh/Li discloses that extracting the metadata from the digital design templates comprises extracting at least one of title of digital design templates, text, description, categories, topics, or tasks [Hsieh, content title/name, description, categories, tags, authors, para 100]. Referring to claim 7, Hsieh/Li discloses that identifying the subset of digital design templates further comprises comparing, utilizing the similarity search model, the user embedding vector with the one or more embedding vectors for the digital design templates within a single latent space to determine which embedding vectors of the one or more embedding vectors satisfy the similarity threshold to the user embedding vector [Hsieh, matchmaking neural network (MmNN) 140 maps the user embeddings and content embeddings of the U-NN 130 and C-NN 120 models into a shared vector space, para 64-65; the MmNN 140 model converts the content and user embeddings into content and user shared-item embeddings respectively, para 65; wherein an analysis of a shared-item embedding is applied in the MmNN 140 model with respect to a user shared-item embedding in selecting at least one content item, para 79, Fig 4, element S140, Fig 5, element S230; analysis of shared-item embeddings include identifying one or more shared-item embeddings satisfying a proximity condition such as a count of content-shared item embeddings within a specified distance threshold from a user shared-item embedding to determine content or users with some degree of similarity, para 128; Euclidean distance thresholds, para 181-183]. 
Referring to claim 8, Hsieh/Li discloses that comparing the user embedding vector with the one or more embedding vectors for the digital design templates further comprises performing real-time pairwise distance computations across the single latent space to identify, in real-time, the subset of digital design templates [Hsieh, distance computation between user/subject and contents/object pairs, para 46; Li, real-time personalization, para 164]. Referring to claim 17, Hsieh/Li discloses that extracting the metadata further comprises: extracting from the digital design templates at least one of title, text, description, categories, topics, or tasks [Hsieh, content title/name, description, categories, tags, authors, para 100]; and determining stylistic elements of the digital design templates based on at least one of the title, the text, the description, the categories, the topics, or the tasks [Hsieh, colors, para 100; content properties collected for a particular piece of content can change depending on the content type and objectives for content suggestions, para 100]. 
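As an editorial illustration of the metadata-extraction step mapped for claims 2 and 17 (fields such as title, description, and categories flattened into text from which an embedding can later be generated), a minimal sketch follows. All field names and the example template are hypothetical; neither the application nor Hsieh discloses this particular function.

```python
def metadata_to_text(template):
    """Flatten selected template metadata fields into a single text
    string suitable as input to a text-embedding model."""
    parts = [
        template.get("title", ""),
        template.get("description", ""),
        " ".join(template.get("categories", [])),
        " ".join(template.get("topics", [])),
    ]
    # Drop empty fields so missing metadata does not add noise.
    return " ".join(p for p in parts if p)

tmpl = {"title": "Summer Sale Flyer",
        "description": "Bright promotional flyer",
        "categories": ["flyer", "marketing"]}
print(metadata_to_text(tmpl))
```

The claim language ("at least one of title, text, description, categories, topics, or tasks") only requires that some such fields feed the embedding step; the concatenation above is one simple way to realize that.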
Referring to claim 20, Hsieh/Li discloses that identifying the subset of digital design templates further comprises comparing, utilizing the similarity search model, the user embedding vector with the one or more embedding vectors for the digital design templates to determine in real-time which embedding vectors of the one or more embedding vectors satisfy the similarity threshold to the user embedding vector [Hsieh, matchmaking neural network (MmNN) 140 maps the user embeddings and content embeddings of the U-NN 130 and C-NN 120 models into a shared vector space, para 64-65; the MmNN 140 model converts the content and user embeddings into content and user shared-item embeddings respectively, para 65; wherein an analysis of a shared-item embedding is applied in the MmNN 140 model with respect to a user shared-item embedding in selecting at least one content item, para 79, Fig 4, element S140, Fig 5, element S230; analysis of shared-item embeddings include identifying one or more shared-item embeddings satisfying a proximity condition such as a count of content-shared item embeddings within a specified distance threshold from a user shared-item embedding to determine content or users with some degree of similarity, para 128; Euclidean distance thresholds, para 181-183; Hsieh, distance computation between user/subject and contents/object pairs, para 46; Li, real-time personalization, para 164]. Claims 9-11, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Hsieh, in view of Li, and further in view of US PGPub 2019/0114417 by Subbarayan et al (hereafter Subbarayan). 
Referring to claim 9, Hsieh discloses a system [computing system, para 207, 209, Fig 21] comprising: one or more memory components [processor readable storage medium 1005, para 210, Fig 21]; and one or more processing devices coupled to the one or more memory components, the one or more processing devices to perform operations comprising [processors 1002A-N, para 210-212, Fig 21]: generating, utilizing a transformer, a plurality of embedding vectors for digital design templates by extracting metadata from the digital design templates [wherein a content neural network (C-NN) 120 functions as a neural network model of items constituting incorporation and mapping of content data items in relation to each other, para 49; C-NN 120 can process actual semantic information embedded within a photo media content item to identify key objects within, or from text media to identify important concepts and keywords, para 51; a transformer may be used to enrich the content embeddings, para 51; content items are represented as embeddings or vectors, para 76; processors 1002A-N include Machine learning/Deep learning processing units such as a Tensor Processing unit, para 211, Fig 21; content repository 110 is populated with data from a data source looking to make use of the system, e.g., product related data, content related data or crawled data may be uploaded to the content repository, para 47, see Fig 1]; detecting one or more user events of a user with respect to the digital design templates [a user device is configured to transmit data captured locally during use of relevant application(s) (i.e.
in real-time) to a local or remote ML algorithm and provide supplemental training data that can serve to fine-tune or increase the effectiveness of the ML algorithm, para 33]; assigning weights to the one or more user events to generate one or more weighted user events [wherein the item matchmaking model (CML) applies a rank-based weighting scheme such as weighted approximate rank pairwise loss to the items in the shared embedding space to characterize a user's relative preference for different items, wherein preference indicates an item liked by the user, para 109]; generating, utilizing the transformer, a user embedding vector from the one or more weighted user events [wherein a user embedding is created from user feature data input into a user neural network model (U-NN) 130 and processed through a matchmaking neural network model in a shared dimensional space to yield a user shared-item embedding S220, para 79-80, Fig 1; Fig 5, elements S210-S220; user shared-item embeddings represent user interaction event data, para 60, 76; wherein a single item-related neural network can be used in place of a C-NN 120 and a U-NN 130, para 61]; and identifying, utilizing a similarity search model, a subset of digital design templates from the digital design templates to recommend to the user by identifying one or more embedding vectors of the plurality of embedding vectors that satisfy a similarity threshold to the user embedding vector [matchmaking neural network (MmNN) 140 maps the user embeddings and content embeddings of the U-NN 130 and C-NN 120 models into a shared vector space, para 64-65; the MmNN 140 model converts the content and user embeddings into content and user shared-item embeddings, respectively, para 65; wherein an analysis of a shared-item embedding is applied in the MmNN 140 model with respect to a user shared-item embedding in selecting at least one content item, para 79, Fig 4, element S140, Fig 5, element S230; analysis of shared-item embeddings
includes identifying one or more shared-item embeddings satisfying a proximity condition, such as a count of content shared-item embeddings within a specified distance threshold from a user shared-item embedding, to determine content or users with some degree of similarity, para 128; Euclidean distance thresholds, para 181-183].

Referring further to claim 9, while Hsieh discloses all of the above claimed subject matter, and also discloses that the content items include photo/graphical items [para 50], it remains silent as to the graphical content items specifically being digital design templates, as to the user event aggregation model being a real-time model, and as to weighting the items utilizing an exponential decay function. Li discloses a visual content library 150 that includes a vast library of visual assets comprising a template library 158 that includes templates containing image, icon and illustration content types [see Fig 1B, element 158, para 43-46], and that a user device is configured to transmit data captured locally during use of relevant application(s) (i.e., in real-time) to a local or remote ML algorithm and provide supplemental training data that can serve to fine-tune or increase the effectiveness of the ML algorithm [para 33]. Hsieh and Li are analogous art directed to the same field of endeavor: the analysis of content items. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the photo/graphical content items of Hsieh with the templates of Li, and to modify the input user feature data to be captured through a real-time model, because doing so would achieve predictable results. The ordinary skilled artisan would have been motivated to make this modification because the image, icon and illustration template assets of Li further refine the type of photo/graphical content items taught by Hsieh.
Furthermore, the capturing of data through the real-time aggregation of Li further defines the method through which user feature data is input in Hsieh.

Furthermore, with respect to claim 9, while Hsieh/Li discloses all of the above claimed subject matter, and also discloses assigning weights to the one or more user events to generate one or more weighted user events [Hsieh, wherein the item matchmaking model (CML) applies a rank-based weighting scheme such as weighted approximate rank pairwise loss to the items in the shared embedding space to characterize a user's relative preference for different items, wherein preference indicates an item liked by the user, para 109], it remains silent as to weighting the items utilizing an exponential decay function. Subbarayan discloses utilizing an exponential decay function to derive weighting coefficients based on the relative distance between two symbols in a sequence of API call items [para 52 and 54]. Hsieh, Li and Subbarayan are analogous art directed to the same field of endeavor: the analysis of content items. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the rank-based weighting scheme of Hsieh with the exponential decay function used to derive weighting coefficients in Subbarayan because doing so would achieve predictable results. The ordinary skilled artisan would have been motivated to make this modification because the exponential decay weighting scheme of Subbarayan is a variation of the type of weighting scheme applied to the content items taught by Hsieh.
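To illustrate the exponential-decay weighting the rejection relies on Subbarayan for, a minimal sketch follows. The half-life parameter and the event ages are purely hypothetical, and this applies decay to event recency as the claims do, whereas Subbarayan applies it to symbol distances.

```python
import math

def event_weight(age, half_life=7.0):
    """Exponential decay: the weight halves every `half_life`
    units of event age (e.g., days since the event occurred)."""
    return math.exp(-math.log(2) * age / half_life)

# A recent template export outweighs an older search query,
# consistent with the first-weight > second-weight relationship
# recited in claim 13 (event types here are illustrative).
w_export = event_weight(age=1.0)
w_search = event_weight(age=14.0)  # two half-lives old -> weight 0.25
assert w_export > w_search
print(round(w_export, 3), round(w_search, 3))
```

Any monotonically decreasing exponential would satisfy the claim language; the half-life form is used only because it makes the decay rate easy to interpret.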
Referring to claim 10, Hsieh/Li/Subbarayan discloses that extracting the metadata from the digital design templates further comprises extracting stylistic elements of the digital design templates based on at least one of title of digital design templates, text, description, categories, topics, or tasks [Hsieh, content properties including colors, content title/name, description, categories, tags, authors, para 100; Hsieh, content properties collected for a particular piece of content can change depending on the content type and objectives for content suggestions, para 100].

Referring to claim 11, Hsieh/Li/Subbarayan discloses that the operations further comprise: aggregating, utilizing the real-time user event aggregation model, the one or more user events into a combined user signal; and generating, utilizing the transformer, the user embedding vector from the combined user signal [Hsieh, wherein a user embedding is created from user feature data input into a user neural network model (U-NN) 130 to yield a user shared-item embedding S220, para 79-80, Fig 1; Fig 5, elements S210-S220; user shared-item embeddings represent user interaction event data, para 60, 76; Li, user device is configured to transmit data captured locally during use of relevant application(s) (i.e., in real-time) to a local or remote ML algorithm and provide supplemental training data that can serve to fine-tune or increase the effectiveness of the ML algorithm, para 33].

Referring to claim 14, Hsieh/Li/Subbarayan discloses that assigning the weights to the one or more user events further comprises identifying a number of digital design templates of the subset of digital design templates based on the weights for each of the one or more user events [Hsieh, weighted ranking is used to train the matchmaking model in analysis that returns a prioritized set of candidate content items based in part on the calculated personalization scores, para 127].
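The aggregate-then-embed pipeline recited in claim 11 can be sketched as follows. The count-based aggregation and fixed-vocabulary embedding below are deliberately simple stand-ins for the claimed real-time user event aggregation model and transformer, not implementations of them; all names are hypothetical.

```python
from collections import Counter

def aggregate_events(events):
    """Combine raw (event_type, timestamp) user events into a single
    combined signal: here, simply a count per event type."""
    return Counter(etype for etype, _ts in events)

def embed_signal(signal, vocab):
    """Stand-in for the transformer: map the combined signal to a
    fixed-length vector over a known event-type vocabulary."""
    return [float(signal.get(etype, 0)) for etype in vocab]

events = [("export", 100), ("search", 101), ("export", 105)]
signal = aggregate_events(events)
vec = embed_signal(signal, vocab=["export", "search", "remix"])
print(vec)  # [2.0, 1.0, 0.0]
```

A real transformer-based user embedding would of course be learned rather than counted; the point of the sketch is only the two-stage structure (events, then combined signal, then vector).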
Referring to claim 15, Hsieh/Li/Subbarayan discloses that identifying the subset of digital design templates further comprises: comparing, utilizing the similarity search model, the user embedding vector with the one or more embedding vectors for the digital design templates within a single latent space to determine pairwise distance computations; and determining which embedding vectors of the one or more embedding vectors satisfy the similarity threshold to the user embedding vector based on the pairwise distance computations [Hsieh, matchmaking neural network (MmNN) 140 maps the user embeddings and content embeddings of the U-NN 130 and C-NN 120 models into a shared vector space, para 64-65; the MmNN 140 model converts the content and user embeddings into content and user shared-item embeddings respectively, para 65; wherein an analysis of a shared-item embedding is applied in the MmNN 140 model with respect to a user shared-item embedding in selecting at least one content item, para 79, Fig 4, element S140, Fig 5, element S230; analysis of shared-item embeddings include identifying one or more shared-item embeddings satisfying a proximity condition such as a count of content-shared item embeddings within a specified distance threshold from a user shared-item embedding to determine content or users with some degree of similarity, para 128; Euclidean distance thresholds, para 181-183]. Claims 3, 4 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Hsieh, in view of Li, as applied to claims 1 and 16, and further in view of US PGPub 2022/0012296 by Marey. 
Referring to claim 3, while Hsieh/Li discloses all of the above claimed subject matter, and also discloses generating the plurality of embedding vectors that represent content within the digital design templates and storing the plurality of embedding vectors in a database corresponding with the digital design templates [Hsieh, generated shared-item embeddings are stored as part of a matchmaking data model, para 65], it remains silent as to utilizing a sentence transformer to generate the embedding vectors. Marey discloses that a sentence embedding machine learning model is employed to recommend a semantically similar post and that word/sentence embeddings are created from the text in social media post items [para 40-41]. Hsieh, Li and Marey are analogous art directed to the same field of endeavor: the analysis of content items. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the generation of the shared-item embeddings of Hsieh to include the sentence embeddings created in Marey because doing so would achieve predictable results. The ordinary skilled artisan would have been motivated to make this modification because the sentence embeddings created from social media post content items in Marey further refine the type of content shared-item embeddings in Hsieh.

Referring to claim 4, Hsieh/Li discloses all of the above claimed subject matter, and also discloses detecting one or more events of the user in real-time by utilizing a user event stream aggregation model [Hsieh, user device configured to transmit data captured locally during use of relevant application(s) (i.e.
in real-time) to a local or remote ML algorithm and provide supplemental training data that can serve to fine-tune or increase the effectiveness of the ML algorithm, para 33], wherein the one or more events include search queries performed by the user [Hsieh, product search results, para 135]; and generating the user embedding vector from the detected one or more events [Hsieh, a user embedding is created from user feature data input into a user neural network model (U-NN) 130 to yield a user shared-item embedding S220, para 79-80, Fig 1; Fig 5, elements S210-S220; user shared-item embeddings represent user interaction event data, para 60, 76]. However, it remains silent as to the one or more events of the user including digital design template remixes and digital design template exports. Marey discloses that social media post templates may be inputted or extracted from past social media post records, including historical social media conversations, and modified prior to providing the post as a recommended post [para 34-35]. Hsieh, Li and Marey are analogous art directed to the same field of endeavor: the analysis of content items. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the user events of Hsieh to include the identification and modification of templates similar to a user's social media posts in past historical social media conversations of Marey because doing so would achieve predictable results. The ordinary skilled artisan would have been motivated to make this modification because the modified templates in Marey further refine the type of user interaction event data in Hsieh.

Referring to claim 18, Hsieh/Li discloses all of the above claimed subject matter, and also discloses detecting one or more events of the user in real-time by utilizing a user event stream aggregation model [Hsieh, user device configured to transmit data captured locally during use of relevant application(s) (i.e.
in real-time) to a local or remote ML algorithm and provide supplemental training data that can serve to fine-tune or increase the effectiveness of the ML algorithm, para 33], wherein the one or more events include search queries performed by the user [Hsieh, product search results, para 135]. However, it remains silent as to the one or more events of the user including digital design template remixes and digital design template exports. Marey discloses that social media post templates may be inputted or extracted from past social media post records, including historical social media conversations, and modified prior to providing the post as a recommended post [para 34-35]. Hsieh, Li and Marey are analogous art directed to the same field of endeavor: the analysis of content items. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the user events of Hsieh to include the identification and modification of templates similar to a user's social media posts in past historical social media conversations of Marey because doing so would achieve predictable results. The ordinary skilled artisan would have been motivated to make this modification because the modified templates in Marey further refine the type of user interaction event data in Hsieh.

Claims 5, 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Hsieh, in view of Li, as applied to claims 1 and 16 above, and further in view of Subbarayan.
Referring to claim 5, while Hsieh/Li discloses all of the above claimed subject matter, and also discloses generating a weight for each of the one or more user events of the user [Hsieh, wherein the item matchmaking model (CML) applies a rank-based weighting scheme such as weighted approximate rank pairwise loss to the items in the shared embedding space to characterize a user's relative preference for different items, wherein preference indicates an item liked by the user, para 109], it remains silent as to weighting the items utilizing an exponential decay function. Subbarayan discloses utilizing an exponential decay function to derive weighting coefficients based on the relative distance between two symbols in a sequence of API call items [para 52 and 54]. Hsieh, Li and Subbarayan are analogous art directed to the same field of endeavor: the analysis of content items. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the rank-based weighting scheme of Hsieh with the exponential decay function used to derive weighting coefficients in Subbarayan because doing so would achieve predictable results. The ordinary skilled artisan would have been motivated to make this modification because the exponential decay weighting scheme of Subbarayan is a variation of the type of weighting scheme applied to the content items taught by Hsieh.

Referring to claim 6, Hsieh/Li/Subbarayan discloses generating the weight for each of the one or more user events based on at least one of recency of the one or more user events or individualized geo-seasonal intent, wherein generating the weight based on individualized geo-seasonal intent comprises determining an individualized annual pattern of behavior [Hsieh, personalization score calculated by taking into account additional factors such as content trending properties (e.g.
recent change in popularity) and incorporated into the reranking process used to filter or set candidate items, para 134; personalization score is based on comparisons of content items most recently visited, purchased, favorited or otherwise interacted with by the user, para 138; Subbarayan, weighting using exponential decay function, para 52,54]; and identifying a number of digital design templates of the subset of digital design templates based on the weight for each of the one or more user events [Hsieh, weighted ranking is used to train the matchmaking model in analysis that returns a prioritized set of candidate content items based in part on the calculated personalization scores, para 127]. Referring to claim 19, Hsieh/Li discloses all of the above claimed subject matter, and also discloses: generating a weight for each of the one or more user events of the user [Hsieh, wherein the item matchmaking model (CML) applies a rank-based weighting scheme such as weighted approximate rank pairwise loss to the items in the shared embedding space to characterize a user’s relative preference for different items, wherein preference indicates an item liked by the user, para 109] and identifying a number of digital design templates of the subset of digital design templates based on the weight for each of the one or more user events [Hsieh, weighted ranking is used to train the matchmaking model in analysis that returns a prioritized set of candidate content items based in part on the calculated personalization scores, para 127]. However it remains silent as to the weighting of the items utilizing an exponential decay function. Subbarayan discloses utilizing an exponential decay function to derive weighting coefficients that increase in relative distance between two symbols in a sequence of API call items [para 52 and 54]. Hsieh, Li and Subbarayan are analogous art that are directed to the same field of endeavor- the analysis of content items. 
It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to modify the rank-based weighting scheme of Hsieh with the utilization of an exponential decay function to derive weighting coefficients, as in Subbarayan, because it would achieve predictable results. The ordinarily skilled artisan would have been motivated to make this modification because the exponential decay weighting scheme of Subbarayan is a variation of the type of weighting scheme of the content items taught by Hsieh.

Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Hsieh in view of Li, in view of Subbarayan, as applied to claim 9, and further in view of Marey.

Referring to claim 12, Hsieh/Li/Subbarayan discloses all of the above claimed subject matter, and also discloses detecting that the one or more user events include a search query performed by the user [Hsieh, product search results, para 135]. However, it remains silent as to the one or more events of the user including a digital design template export. Marey discloses that social media post templates may be inputted or extracted from past social media post records, including historical social media conversations, and modified prior to providing the post as a recommended post [para 34-35]. Hsieh, Li, Subbarayan and Marey are analogous art directed to the same field of endeavor: the analysis of content items. It would have been obvious to one of ordinary skill in the art at the time of the claimed invention to modify the user events of Hsieh to include the modification of templates similar to a user’s social media posts in past historical social media conversations, as in Marey, because it would achieve predictable results. The ordinarily skilled artisan would have been motivated to make this modification because the modified templates in Marey further refine the type of user interaction event data in Hsieh.
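For concreteness, the exponential-decay event weighting at the center of the rejections of claims 5, 13 and 19 can be sketched in a few lines. This is an illustrative reconstruction only, not code from Hsieh, Li or Subbarayan; the half-life value and event names are hypothetical.

```python
def event_weight(age_days: float, half_life_days: float = 30.0) -> float:
    """Exponential-decay weight for a user event.

    An event half_life_days old receives exactly half the weight of an
    event occurring now; older events decay smoothly toward zero.
    """
    return 0.5 ** (age_days / half_life_days)

# Hypothetical user events: (event type, days since the event)
events = [
    ("template_export", 2),   # recent, high-intent signal
    ("search_query", 10),
    ("template_remix", 45),   # older event, heavily decayed
]

weights = {name: event_weight(age) for name, age in events}
# Recency alone orders the weights: export > search > remix
```

Note that recency decay by itself only orders events by age; claim 13's requirement that a template export outweigh a search query regardless of timing would additionally need a per-event-type multiplier.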
Referring to claim 13, Hsieh/Li/Subbarayan/Marey discloses that assigning the weights further comprises assigning, utilizing the exponential decay function, a first weight to the digital design template export and a second weight to the search query performed by the user, wherein the first weight is greater than the second weight [Hsieh, personalization score calculated by taking into account additional factors such as content trending properties (e.g., recent change in popularity) and incorporated into the reranking process used to filter or set candidate items, para 134; personalization score is based on comparisons of content items most recently visited, purchased, favorited or otherwise interacted with by the user, para 138; Subbarayan, weighting using exponential decay function involves contexts including positions further away from the symbol being analyzed associated with lesser weights than symbols positioned closer to the symbol being analyzed, para 54; Hsieh, product search results, para 135; Marey, modified social media post templates, para 34-35].

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHERYL M SHECHTMAN, whose telephone number is (571) 272-4018. The examiner can normally be reached Mon-Fri, 8am-4pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (in USA or Canada) or 571-272-1000.

CHERYL M SHECHTMAN
Patent Examiner, Art Unit 2164
/C.M.S/

/AMY NG/
Supervisory Patent Examiner, Art Unit 2164

Prosecution Timeline

Jul 12, 2023
Application Filed
Mar 03, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554725
System and Method for Searching Electronic Records using Gestures
2y 5m to grant Granted Feb 17, 2026
Patent 12536201
SYSTEM AND METHODS FOR VARYING OPTIMIZATION SOLUTIONS USING CONSTRAINTS BASED ON AN ENDPOINT
2y 5m to grant Granted Jan 27, 2026
Patent 12530380
OBJECT DATABASE FOR BUSINESS MODELING WITH IMPROVED DATA SECURITY
2y 5m to grant Granted Jan 20, 2026
Patent 12524404
METHOD, DEVICE, AND SYSTEM WITH PROCESSING-IN-MEMORY (PIM)-BASED HASH QUERYING
2y 5m to grant Granted Jan 13, 2026
Patent 12493590
METHOD AND SYSTEM FOR DEDUPLICATING POINT OF INTEREST DATABASES
2y 5m to grant Granted Dec 09, 2025
Based on this examiner's 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
72%
Grant Probability
99%
With Interview (+28.1%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 300 resolved cases by this examiner. Grant probability derived from career allow rate.
