Prosecution Insights
Last updated: April 17, 2026
Application No. 18/668,357

ARTIFICIAL INTELLIGENCE BASED EVENT GENERATION METHOD AND SYSTEM FOR GENERATING MEMOIR EVENTS BASED ON INFORMATION ASSOCIATED WITH USERS

Non-Final OA (§101, §103)
Filed: May 20, 2024
Examiner: OGUNBIYI, OLUWADAMILOL M
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
OA Rounds: 1-2
To Grant: 2y 12m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 78% (above average; 236 granted / 304 resolved; +15.6% vs TC avg)
Interview Lift: +18.6% (strong; resolved cases with interview vs without)
Avg Prosecution (typical timeline): 2y 12m; 31 applications currently pending
Total Applications (career history): 335 across all art units

Statute-Specific Performance

§101: 20.1% (-19.9% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 12.1% (-27.9% vs TC avg)
§112: 13.7% (-26.3% vs TC avg)

TC averages are estimates; based on career data from 304 resolved cases.
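As a quick sanity check, the headline examiner figures above can be reproduced from the reported raw counts. The back-calculated TC average is an assumption: it treats the "+15.6% vs TC avg" figure as a percentage-point difference.

```python
# Reproduce the headline examiner statistics from the raw counts reported above.
granted, resolved = 236, 304
allow_rate = granted / resolved          # career allowance rate (~77.6%, shown as 78%)

# Assumption: "+15.6% vs TC avg" is a percentage-point delta over the TC average.
tc_avg = allow_rate - 0.156              # implied TC 2600 average allow rate

print(f"Career allow rate: {allow_rate:.1%}")
print(f"Implied TC 2600 average: {tc_avg:.1%}")
```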

Office Action

Grounds: §101, §103
DETAILED ACTION

Claims 1–20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 5, 12 and 18 are objected to because of the following informalities: The first, second and third limitations of these claims recite: ‘… one or more events occurred during the life and experiences …’ which the Examiner believes should instead be --one or more events that occurred during the life and experiences--. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 4-6, 8, 11-13, 15 and 17-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more.
Claims 1, 8 and 15 provide teaching for an AI-based event generation which obtains inputs from an electronic device associated with one or more users, the input being associated with one or more of prompts, life and experiences with the one or more users, tokenising the received input information to be converted into one or more words, sub-words and characters, converting the tokens into embeddings as numerical representations which are one or more of semantic or syntactic information associated with the tokens, assigning weights to the tokens to determine the relationships between the tokens based on an analysis of the importance of the tokens within the sequences of the inputs based on the embeddings as they’re processed by a neural network, generating one or more second tokens based on the weights of the first tokens and on the one or more first tokens or second tokens using a probability distribution applied on vocabularies of the one or more second tokens, the second tokens being generated until a predetermined optimum length of one or more sequences of the second tokens is reached, concatenating the second tokens to generate memoir events and then providing the generated memoir event as an output on a user interface. Nothing in the claims precludes the claims from being performed in the human mind. The entire process involves data gathering, data transformation, data analysis and data presentation.
The process can be performed by having a human receive a user’s input as a prompt for information: the human writes down the user’s prompts, separates the prompt into first tokens which are one or more of words, sub-words or other characters, represents the tokens as particular numerical values based on semantic or syntactic information of the tokens, and then assigns particular weights or scores to the tokens so as to indicate the relationships between the tokens based on an importance of individual tokens within the sequence of the input. The human then generates second tokens from the first tokens by making use of the assigned weights of the first tokens and a probability distribution applied on vocabularies of the second tokens, such that the second tokens keep getting generated until an optimum threshold length of tokens is reached that would satisfy an intended sequence (such as a sentence or paragraph length), these second tokens then being merged together and presented as one or more memoir events to the user who requested them.

The mentioning of a hardware processor, memory and the non-transitory computer-readable storage medium simply serves as available hardware to be able to perform the claimed invention. The claims hereby recite a mental process.

This judicial exception is not integrated into a practical application as the claims simply teach of collecting data through receiving the one or more inputs, transforming data to obtain tokens, embeddings and weights, analysing data to determine that a predetermined optimum length of the sequences has been reached, and presenting data in the form of concatenating second tokens and outputting the concatenated tokens on a user interface. The invention is not tied to any particular defining structure and simply provides instructions to apply the judicial exception.
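For context, the tokenise/embed/weight/generate/concatenate flow the examiner characterizes above maps onto a standard autoregressive decoding loop. The sketch below is purely illustrative: the vocabulary, embeddings and next-token table are invented stand-ins, not the applicant's model or any cited reference.

```python
import math

def tokenize(text):
    """Split the input into first tokens (whole words, in this toy example)."""
    return text.lower().split()

# Toy embeddings: 2-dimensional numerical representations of each token.
EMBED = {"tell": [1.0, 0.0], "my": [0.5, 0.5], "story": [0.0, 1.0]}

def attention_weights(tokens):
    """Score each token's importance within the sequence (softmax of a toy
    relevance score, standing in for a trained self-attention mechanism)."""
    scores = [sum(EMBED.get(t, [0.0, 0.0])) for t in tokens]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

# Toy next-token table standing in for "a probability distribution applied
# on vocabularies of the one or more second tokens".
NEXT = {"<start>": "she", "she": "grew", "grew": "up", "up": "happily",
        "happily": "<end>"}

def generate(prompt, max_len=10):
    first = tokenize(prompt)
    weights = attention_weights(first)   # computed here but unused in this toy
    second, cur = [], "<start>"
    while len(second) < max_len:         # predetermined optimum length cap
        cur = NEXT.get(cur, "<end>")
        if cur == "<end>":
            break
        second.append(cur)
    return " ".join(second)              # concatenate into the "memoir event"

print(generate("tell my story"))  # -> "she grew up happily"
```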
The techniques can be performed by a generic computer which would be presented as a tool to implement the abstract idea (classifiable as automation of a mental concept). The Specification in [0047] describes the use of several generic computers and the like, suitable to execute the claimed technique. This is presented in generic terms, such that a general-purpose computer could be used to address the claim limitations. The artificial intelligence model applied here could be at least one of a Large Language model, a Deep Learning model, a Random Forest model, a Decision-making model, a Linear Regression model, a Natural Language Processing (NLP) model, and the like [0074]. The Specification also provides that the neural network may comprise a transformer [0068]. The claims provide the use of the artificial intelligence model for the purpose of generating a memoir event, without specifying which of these particular AI models is applied to the generation, such that any random or general-purpose AI model could be applied to address the memoir event generation. The reader is also not informed in the claims of the particular neural network that is applied, leading to a possible application of any of the available neural networks. Mentioning an artificial intelligence model or a neural network can simply refer to the application of a general-purpose computer to perform a mental process. Generating memoir events could mentally be performed by a human who, upon receiving all the available information and a prompt to generate such an event making use of the available information, would then present a story or biography of a particular user according to the specifications provided by the user. The task of assigning weights to tokens based on relationships analysed by importance could also be mentally performed by a human, substituting the need for a neural network to perform such.
The recited user interface can likewise be satisfied by presenting the generated text on a piece of paper to the intended recipient. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception because the invention is not tied to a practical application. The claims provide techniques that amount to no more than mere instructions that apply the judicial exception which can be performed by a generic device. While the claims make mention of an artificial intelligence and a neural network, these do not recite specifics on how the models are performed, and therefore still do not amount to significantly more than the mentioned judicial exception. Mere instructions to apply an exception using a generic device cannot provide an inventive concept. Claims 1, 8 and 15 are not eligible.

Claims 4, 11 and 17 provide obtaining an input voice, determining an optimised voice cloning model from amongst AI models of voice cloning, extracting emotions or accents from the input voice, applying the emotions or accents to a base speech model and generating the cloned voices based on applying one or more speech artifacts. A human may listen to a received human speech, determine the accents or emotions present in the speech, and make an attempt to mimic the voice of the received speech, making use of the determined accent or emotion that was present in the received speech, and speak in such a similar fashion. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.
Claims 5, 12 and 18 provide teaching for receiving inputs that comprise one or more events that occurred during the life and experiences associated with users, retrieving information associated with the events based on training an AI model on data sources and providing the information associated with the events that occurred during the life and experiences of the users, in order to have the one or more users select the information associated with the one or more events. A human may receive input information regarding the life and experiences of users, obtain information associated with events that occurred during the life and experiences of the users, and then provide the obtained information to the users so they can select the information associated with the one or more events. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claims 6, 13 and 19 provide obtaining multi-media contents associated with the life and experiences of the users from the electronic devices of the users, analysing the multi-media contents to extract emotional parameters such as sentiments of being happy or sad that are associated with the users, and then integrating the emotional parameters into the memoir events. A human may receive multi-media content such as videos, images and audio that are associated with the life and experiences of the users, analyse these multi-media content to determine emotions observed within the multi-media content, and then include the determined emotions into the memoir events. This does not integrate any practical application nor does it provide any additional element sufficient to amount to more than the mentioned judicial exception.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 3, 8, 9, 10, 15 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over MASCHMEYER et al. (US 2024/0256764 A1: hereafter — Maschmeyer) in view of BOUVIER et al. (WO 2024/200170 A1: hereafter — Bouvier) further in view of SHRIVASTAVA et al. (US 2021/0374338 A1: hereafter — Shrivastava).
For claim 1, Maschmeyer discloses an artificial intelligence based (AI-based) (Maschmeyer: [0084] — a dedicated artificial intelligence processor unit (for artificial intelligence processings)) event generation method for generating one or more memoir events based on one or more information associated with one or more users, the artificial intelligence based (AI-based) event generation method (Maschmeyer: [0006] — prompting an LLM to generate a list of object attributes; [0023] — having a prompt to obtain a generated description (the generated description being taken as the claimed memoir event)) comprising: obtaining, by one or more hardware processors (Maschmeyer: [0081] — employing the use of processors), one or more inputs from one or more electronic devices associated with the one or more users, wherein the one or more inputs comprise the one or more information related to the one or more users, and wherein the one or more information is associated with at least one of: one or more prompts, and life and experiences associated with the one or more users (Maschmeyer: [0082] — receiving natural language inputs as prompts, which can be an example of a desired output (indicating that it is information related to the requesting user)); tokenizing, by the one or more hardware processors, the one or more information related to the one or more users, to convert the one or more information into one or more first tokens being analyzed by an artificial intelligence (AI) model, using a tokenization process, wherein the one or more first tokens comprise at least one of: one or more words, one or more sub-words, and one or more characters, based on the tokenization process (Maschmeyer: [0075] — parsing the inputs into tokens to get shorter segments represented by words or character); converting, by the one or more hardware processors, each token into one or more embeddings, wherein the one or more embeddings are one or more numerical representations comprising at least one 
of: one or more semantic and syntactic information associated with the one or more first tokens (Maschmeyer: [0076] — converting the tokens into embeddings that are learned numerical representations of the tokens, and they capture the semantic meaning of the text segments represented by the tokens); wherein the one or more second tokens are generated until the artificial intelligence (AI) model generates a predetermined optimum length of one or more sequences of the one or more second tokens (Maschmeyer: [0087] — setting an output to have a minimum token length of 10 tokens or a maximum of 1000 tokens (as the desired predetermined optimum length)).

The reference of Maschmeyer provides teaching for obtaining an input, tokenising it and converting it into embeddings but differs from the claimed invention in that the claimed invention further provides teaching for assigning weights to each of the tokens to determine relationships between them while analysing an importance of the tokens in the inputs.
This is however not new to the art as the reference of Bouvier is now introduced to teach this as: assigning, by the one or more hardware processors, one or more weights to each token to determine one or more relationships between the one or more first tokens, upon analyzing an importance of the one or more first tokens in one or more sequences of the one or more inputs based on the one or more embeddings being processed at a neural network architecture of the artificial intelligence (AI) model (Bouvier: page 12 line 25 – page 13 line 11 — the presence of a transformer network which is a type of neural network architecture that applies a self-attention mechanism used to create an attention score for each input token based on its relationship with other tokens of the input sequence, which is then subsequently used to weigh the importance of different tokens when making predictions about an output sequence, and also using the attention score to weigh the importance of each token when encoding it into a hidden representation).

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to incorporate the known teaching of Bouvier which creates a score for each token based on its relationship to other tokens in a sequence for the purpose of weighing the importance of the tokens, with the teaching of Maschmeyer which provides obtaining and tokenising an input to then represent it as an embedding for the purpose of text generation, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of obtaining weights of the available tokens to determine the words/tokens that would be relevant for the next processing stage. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
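The self-attention scoring that Bouvier is cited for (an attention score per token based on its relationships to the other tokens of the sequence) can be illustrated in a few lines. The token vectors below are toy values chosen for illustration, not values from any cited reference.

```python
import math

def softmax(xs):
    """Normalize raw scores into a probability distribution."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention_scores(queries, keys):
    """For each query token, return its attention weights over all key tokens
    (scaled dot-product attention, softmax-normalized per row)."""
    d = len(keys[0])
    return [softmax([sum(q[i] * k[i] for i in range(d)) / math.sqrt(d)
                     for k in keys])
            for q in queries]

# One toy 2-d vector per token; a trained model would learn these.
vecs = [[1.0, 0.0], [0.8, 0.2], [0.0, 1.0]]
scores = self_attention_scores(vecs, vecs)
# Each row sums to 1 and weighs that token's relationship to the others.
```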
The combination of Maschmeyer in view of Bouvier provides teaching for an artificial intelligence event generation involving the generation of a first set of tokens, but particularly fails to teach of generating a second set of tokens from the first, these tokens being used for the direct purpose of the event generation. This is however seen to be taught by the reference of Shrivastava as: generating, by the one or more hardware processors, one or more second tokens based on the one or more weights assigned to each token of the one or more first tokens, by determining one or more subsequent tokens associated with the one or more second tokens based on at least one of: the one or more first tokens and the one or more second tokens using probability distribution applied on vocabularies of the one or more second tokens (Shrivastava: [0029] — a situation whereby a current word of a summary (as the second token) is generated based on previous words, the probability distributions applied to the previous words (the words being tokens), a first probability distribution indicating selection probability value of each word of a first set of words (indicating the weight of the first token)), generating, by the one or more hardware processors, the one or more memoir events by concatenating the one or more second tokens (Shrivastava: [0081] — combining each of the selected words (as the respective next tokens) iteratively until a sentence ends (indicating the concatenation of words to make a sentence)); and providing, by the one or more hardware processors, an output of the generated one or more memoir events on a user interface associated with the one or more electronic devices of the one or more users (Shrivastava: [0081] — outputting the generated text summary; [0120] — a display for the output summary; [0028], [0072] — an application to story-telling (being an indication of generating a memoir event)). 
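Shrivastava's word-by-word selection, where each next word is drawn from a probability distribution conditioned on the words so far and iterated until the sentence ends, follows the familiar greedy-decoding pattern. The distributions below are hand-made for illustration only.

```python
# Toy conditional distributions: P(next word | words so far).
P = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"garden": 0.7, "end": 0.3},
    ("the", "garden"): {"bloomed": 0.8, ".": 0.2},
    ("the", "garden", "bloomed"): {".": 1.0},
}

def greedy_decode(max_len=10):
    """Pick the highest-probability next word at each step until the
    sentence ends ('.') or a maximum length is reached."""
    words = []
    while len(words) < max_len:
        dist = P.get(tuple(words), {".": 1.0})
        nxt = max(dist, key=dist.get)   # greedy: highest-probability word
        if nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)              # concatenate the selected words

print(greedy_decode())  # -> "the garden bloomed"
```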
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to incorporate the known teaching of Shrivastava which provides generating a second set of tokens from a first set of tokens which are directly applied as a combination to generate the claimed memoir event, with the teaching of the combination of Maschmeyer in view of Bouvier which provides an artificial intelligence event generation involving the generation of a first set of tokens for the purpose of textual generation, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of applying the tokens present in the input query to generate suitable and useful tokens that would also be present in the output query, thereby ensuring a relevance of the final output sentences to the input query. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

For claim 2, claim 1 is incorporated and the combination of Maschmeyer in view of Bouvier further in view of Shrivastava discloses the artificial intelligence based (AI-based) event generation method, further comprising training, by the one or more hardware processors, the artificial intelligence (AI) model, by: obtaining, by the one or more hardware processors, one or more data from one or more data sources comprising at least one of: one or more books, one or more articles, and one or more websites (Shrivastava: [0038] — obtaining source information from a website); tokenizing, by the one or more hardware processors, one or more data to convert the one or more data into one or more third tokens, wherein the one or more third tokens comprise the one or more words, one or more parts of words, and one or more individual characters (Maschmeyer: [0075] — tokenisation to obtain words or characters (which can also include parts of words)); training, by the one or more hardware processors, the
artificial intelligence (AI) model on the obtained one or more data using a supervised learning algorithm, wherein training the artificial intelligence (AI) model comprises learning of the artificial intelligence (AI) model to at least one of: determine at least one of: the one or more second tokens in the one or more sequences and provide missing one or more second tokens in the one or more sequences (Bouvier: Page 11 lines 26–27 — supervised learning training; Page 13 lines 13–16 — generating the tokens decoded to be output by the transformer network into a desired output (as the determination of the second tokens)); fine-tuning, by the one or more hardware processors, the trained artificial intelligence (AI) model on one or more task-specific datasets to optimize performance of the artificial intelligence (AI) model on one or more applications (Shrivastava: [0045] — performing fine-tuning in order to update a text summary for domain-specific requirements (task-specific for particular applications)); and evaluating, by the one or more hardware processors, the trained artificial intelligence (AI) model to assess capabilities of the artificial intelligence (AI) model in analyzing and generation of the one or more memoir events (Shrivastava: [0070] — evaluating the trained model in its generation of the required output in a particular domain (domain-specific language requirements, with [0072] providing an application to story-telling being an indication of generating a memoir event)).
For claim 3, claim 2 is incorporated and the combination of Maschmeyer in view of Bouvier further in view of Shrivastava discloses the artificial intelligence based (AI-based) event generation method of claim 2, wherein the trained artificial intelligence (AI) model on the one or more task-specific datasets is fine-tuned using at least one of: beam search and reinforcement learning, based on a feedback on the performance of the artificial intelligence (AI) model (Shrivastava: [0045] — a fine-tuned decoder trained based on reinforcement learning).

As for claim 8, system claim 8 and method claim 1 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Maschmeyer in [0084] provides a processor as well as memory suitable to address the limitations of this claim. Accordingly, claim 8 is similarly rejected under the same rationale as applied above with respect to method claim 1.

As for claim 9, system claim 9 and method claim 2 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 9 is similarly rejected under the same rationale as applied above with respect to method claim 2.

As for claim 10, system claim 10 and method claim 3 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 10 is similarly rejected under the same rationale as applied above with respect to method claim 3.

As for claim 15, computer program product claim 15 and method claim 1 are related as computer program product storing executable instructions required for performing the claimed method steps on a computer. Maschmeyer in [0165] provides teaching for a non-transitory computer-readable medium suitable to read upon the limitations of this claim.
Accordingly, claim 15 is similarly rejected under the same rationale as applied above with respect to method claim 1.

As for claim 16, computer program product claim 16 and method claim 2 are related as computer program product storing executable instructions required for performing the claimed method steps on a computer. Accordingly, claim 16 is similarly rejected under the same rationale as applied above with respect to method claim 2.

Claims 4, 11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Maschmeyer (US 2024/0256764 A1) in view of Bouvier (WO 2024/200170 A1) further in view of Shrivastava (US 2021/0374338 A1) as applied to claim 1, and further in view of Barry et al. (US 2023/0259719 A1: hereafter — Barry).

For claim 4, claim 3 is incorporated and the combination of Maschmeyer in view of Bouvier further in view of Shrivastava provides teaching for the artificial intelligence based (AI-based) event generation method, further comprising: obtaining, by the one or more hardware processors, one or more voice based inputs from the one or more electronic devices associated with the one or more users (Shrivastava: [0120] — capturing voice input from a user at a user device).
This combination however fails to teach the further limitations of this claim, for which the reference of Barry is now introduced to teach as: determining, by the one or more hardware processors, an optimized voice cloning based artificial intelligence model among one or more voice cloning based artificial intelligence models upon analyzing one or more cloned voices being identical to one or more voices associated with the one or more users (Barry: [0046] — AI voice cloning to present output voice that is able to match the voice of the user, this being selected so as to sound less machine-rendered and more human (this, as well as it being chosen to match the user is an indication of a selection from available models)); extracting, by the one or more hardware processors, at least one of: emotions and accents, associated with the one or more voice based inputs of the one or more users, using the optimized voice cloning based artificial intelligence model (Barry: [0046] — adjusting the accent to better match that of the user (to show an extraction of the user’s accent associated with the received user voice)); applying, by the one or more hardware processors, at least one of: the emotions and accents into a base speech model (Barry: [0046] — “[t]he audio is preferably also processed by one or more of an accent translator engine (which accents the translated message to make it sound less machine-rendered and more human) and/or a voice cloning engine (which adjusts pitch, tone, accent, etc. 
to better match that of the user, thereby making the user’s translated message sound as if it was actually spoken by the user” (teaching of applying a determined accent to a determined speech model so as to present an audible message with the determined accent applied to it)); and generating, by the one or more hardware processors, the one or more cloned voices being identical to one or more voices associated with the one or more users by applying one or more speech artifacts with the base speech model (Barry: [0046] — “[t]he audio is preferably also processed by one or more of an accent translator engine (which accents the translated message to make it sound less machine-rendered and more human) and/or a voice cloning engine (which adjusts pitch, tone, accent, etc. to better match that of the user, thereby making the user’s translated message sound as if it was actually spoken by the user” (teaching of applying a determined accent to a determined speech model so as to present an audible message with the determined accent applied to it as a speech artifact); [0052] — an audible message is presented in the user’s voice).

The combination of Maschmeyer in view of Bouvier further in view of Shrivastava provides teaching for an AI generated memoir event as a story, presented in textual form, but differs from the claimed invention in that the claimed invention now further provides teaching for matching an input voice with a cloned voice that also includes an extracted emotion or accent. This is however not new to the art as the reference of Barry is seen to teach above.
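Claim 4's voice-cloning flow (obtain a voice input, extract accent and emotion, apply them to a base speech model, synthesize) can be summarized structurally. Every name below is a hypothetical stand-in; no real TTS library or API is assumed, and the "analysis" step is a placeholder for actual signal processing.

```python
from dataclasses import dataclass

@dataclass
class VoiceProfile:
    """Accent and emotion extracted from the user's voice input."""
    accent: str
    emotion: str

def extract_profile(voice_input: str) -> VoiceProfile:
    # Placeholder analysis: a real system would run ML/signal processing here.
    return VoiceProfile(accent="midwestern", emotion="warm")

def clone_voice(voice_input: str, text: str) -> str:
    profile = extract_profile(voice_input)   # extract accent/emotion
    base = "base-speech-model"               # the base speech model
    # "Applying speech artifacts": tag the synthesis request with the profile.
    return f"[{base}|{profile.accent}|{profile.emotion}] {text}"

print(clone_voice("sample.wav", "Once upon a time..."))
```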
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to incorporate the known teaching of Barry which provides generating a voice clone of a user who provides a voice input, the voice matching the input accent, with the teaching of the combination of Maschmeyer in view of Bouvier further in view of Shrivastava which provides teaching for an AI generated memoir event as a story, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of audibly presenting a generated story in the target user’s voice, matching the user’s accent. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 11, system claim 11 and method claim 4 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 11 is similarly rejected under the same rationale as applied above with respect to method claim 4.

As for claim 17, computer program product claim 17 and method claim 4 are related as computer program product storing executable instructions required for performing the claimed method steps on a computer. Accordingly, claim 17 is similarly rejected under the same rationale as applied above with respect to method claim 4.

Claims 5, 12 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Maschmeyer (US 2024/0256764 A1) in view of Bouvier (WO 2024/200170 A1) further in view of Shrivastava (US 2021/0374338 A1) as applied to claim 1, further in view of Harris et al. (US 11,216,892 B1: hereafter — Harris).
For claim 5, claim 1 is incorporated but the combination of Maschmeyer in view of Bouvier further in view of Shrivastava fails to disclose the limitations of this claim, for which the reference of Harris is now introduced to teach as the artificial intelligence based (AI-based) event generation method, further comprising: obtaining, by the one or more hardware processors, the one or more inputs from the one or more electronic devices associated with the one or more users, wherein the one or more inputs comprise one or more events occurred during the life and experiences associated with the one or more users (Harris: Col 1 line 53 – Col 2 line 18 — content being received from a posting-user in an online system whereby life events can be obtained (teaching of content items as the input which can contain one or more life events)); retrieving, by the one or more hardware processors, information associated with the one or more events occurred during the life and experiences based on training of the artificial intelligence (AI) model on the one or more data sources (Harris: Col 1 line 53 – Col 2 line 18 — a machine learning model that has been trained on content items to be able to classify the content items as life events (the entire content items that comprise the image, video, text or other data serve as the claimed one or more data sources that are associated with website information)); and providing, by the one or more hardware processors, the information associated with the one or more events occurred during the life and experiences, to the one or more electronic devices associated with the one or more users to adapt the one or more users to select the information associated with the one or more events (Harris: Col 1 line 53 – Col 2 line 18 — a machine learning model which determines that the content items should be classified as a life event in a particular category, and then this is provided to the user, giving the user the option to upgrade the content item into 
a life event item specific to a particular category (teaching of an adaptation to present the user with the option of selecting information associated with the one or more events)).

The combination of Maschmeyer in view of Bouvier further in view of Shrivastava provides teaching for an AI event generation technique, but differs from the claimed invention in that the claimed invention further provides teaching for obtaining events that occur during a user’s life as input, retrieving information associated with the events, and presenting the information to the user so as to select the information associated with the one or more events. This is however not new to the art as the reference of Harris is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to incorporate the known-teaching of Harris which provides teaching for obtaining events that occur during a user’s life as input, retrieving information associated with the events, and presenting the information to the user so as to select the information associated with the one or more events, with the teaching of the combination of Maschmeyer in view of Bouvier further in view of Shrivastava which provides teaching for an AI event generation technique, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of ensuring that the user makes a selection of the life events that the user intends to include in the generated summary/memoir of events based on events that are determined to be relevant by the AI model. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 12, system claim 12 and method claim 5 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step.
Accordingly, claim 12 is similarly rejected under the same rationale as applied above with respect to method claim 5.

As for claim 18, computer program product claim 18 and method claim 5 are related as a computer program product storing executable instructions required for performing the claimed method steps on a computer. Accordingly, claim 18 is similarly rejected under the same rationale as applied above with respect to method claim 5.

Claims 6, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Maschmeyer (US 2024/0256764 A1) in view of Bouvier (WO 2024/200170 A1) further in view of Shrivastava (US 2021/0374338 A1), further in view of Harris (US 11,216,892 B1), and further in view of YANG et al. (US 2024/0412029 A1: hereafter — Yang).

For claim 6, claim 1 is incorporated but the combination of Maschmeyer in view of Bouvier further in view of Shrivastava fails to disclose the limitations of this claim, for which the reference of Harris is now introduced to teach as the artificial intelligence based (AI-based) event generation method, further comprising: obtaining, by the one or more hardware processors, one or more multi-media contents comprising one or more videos, one or more images, one or more audios, associated with the life and experiences of the one or more users, from the one or more electronic devices associated with the one or more users (Harris: Col 1 line 53 – Col 2 line 18 — content being received from a posting-user in an online system whereby life events can be obtained (teaching of content items as the input which can contain one or more life events); Col 5 lines 52–55 — content items including image, audio and video data).
The combination of Maschmeyer in view of Bouvier further in view of Shrivastava provides teaching for an AI event generation technique, but differs from the claimed invention in that the claimed invention further provides teaching for obtaining multi-media contents associated with life and experiences of a user. This is however not new to the art as the reference of Harris is seen to teach above.

Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to incorporate the known-teaching of Harris which provides teaching for obtaining multi-media content as input, with the teaching of the combination of Maschmeyer in view of Bouvier further in view of Shrivastava which provides teaching for an AI event generation technique, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of being able to generate textual content as summaries from other information associated with a user’s experiences other than only textual inputs. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).
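The mapping above reads Harris as a trained model that flags incoming content items as life events and then offers the user the option to confirm. A minimal sketch of that flow, with a simple keyword scorer standing in for the trained classifier (the class, category names, and threshold are illustrative assumptions, not drawn from Harris or the claims):

```python
from dataclasses import dataclass, field

# Hypothetical content item, loosely following the Examiner's reading of
# Harris: an item may carry text alongside image/video/audio payloads.
@dataclass
class ContentItem:
    text: str
    media: list = field(default_factory=list)  # e.g. ["image", "video"]

# Stand-in for the trained model: keyword sets per life-event category.
CATEGORY_KEYWORDS = {
    "wedding": {"married", "wedding", "engaged"},
    "graduation": {"graduated", "diploma", "degree"},
    "new_job": {"hired", "promotion", "job"},
}

def classify_life_event(item: ContentItem, threshold: float = 0.3):
    """Score the item against each category; if the best score clears the
    threshold, return the category and (mirroring Harris's flow) a flag
    prompting the user to upgrade the item into a life-event item."""
    tokens = set(item.text.lower().split())
    best_category, best_score = None, 0.0
    for category, keywords in CATEGORY_KEYWORDS.items():
        score = len(tokens & keywords) / len(keywords)
        if score > best_score:
            best_category, best_score = category, score
    if best_score >= threshold:
        return {"category": best_category, "prompt_user_to_confirm": True}
    return None  # not classified as a life event
```

A real system would replace the keyword table with a learned classifier trained over the full content items (image, video, text), which is how the Examiner reads Harris's "one or more data sources".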
The combination of Maschmeyer in view of Bouvier further in view of Shrivastava and further in view of Harris fails to disclose the further limitations of this claim, for which the reference of Yang is now introduced to teach as the artificial intelligence based (AI-based) event generation method, further comprising:

analyzing, by the one or more hardware processors, the one or more multi-media contents to extract one or more emotional parameters associated with the one or more users, wherein the one or more emotional parameters comprise at least one of: sentiment, happy, and sad, associated with the one or more users (Yang: [0035] — an emotion service is able to extract emotional sentiments of a user from audio and video data; [0077] — predicting emotional states of a user to include happy and sad); and

integrating, by the one or more hardware processors, the one or more emotional parameters associated with the one or more users, into the one or more memoir events to optimize the one or more memoir events (Yang: [0102] — when the generative AI presents its output, it can include the detected emotion in emoji form; [0037], [0040] — integrating the detected emotions into a generative AI output (teaching of applying the detected sentiment in user input information to output that’s generated by a generative AI model)).

The combination of Maschmeyer in view of Bouvier further in view of Shrivastava further in view of Harris provides teaching for an AI event generation technique which receives multi-media input related to the life and experiences of a user, but differs from the claimed invention in that the claimed invention further provides extracting emotional parameters from the multi-media items and integrating the extracted emotional parameters into the one or more memoir events to optimize them. This is however not new to the art as the reference of Yang is seen to teach above.
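The Yang mapping describes a two-step emotion pipeline: extract an emotional parameter (e.g. happy or sad sentiment) from user content, then fold it into the generated memoir event. A toy sketch under that reading, with word lists standing in for Yang's emotion-recognition service (the function names, word lists, and markers are hypothetical):

```python
# Stand-in lexicons for an emotion model run over transcripts of the
# user's audio/video content (illustrative only).
HAPPY_WORDS = {"joy", "happy", "wonderful", "celebrated", "loved"}
SAD_WORDS = {"grief", "sad", "missed", "lost", "mourned"}

def extract_emotion(transcript: str) -> str:
    """Return one emotional parameter (happy / sad / neutral) for the
    given transcript, by counting lexicon hits."""
    tokens = set(transcript.lower().split())
    happy = len(tokens & HAPPY_WORDS)
    sad = len(tokens & SAD_WORDS)
    if happy > sad:
        return "happy"
    if sad > happy:
        return "sad"
    return "neutral"

def integrate_emotion(memoir_event: str, emotion: str) -> str:
    """Attach the detected emotion to the memoir-event text, loosely
    echoing Yang's inclusion of detected emotion (e.g. as emoji) in
    generated output."""
    marker = {"happy": ":)", "sad": ":(", "neutral": ""}[emotion]
    return f"{memoir_event} {marker}".strip()
```

In the claimed arrangement the integration step would condition the generative model itself rather than merely annotating its output; the marker here is only a minimal illustration of carrying the extracted parameter through to the memoir event.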
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to incorporate the known-teaching of Yang which extracts emotional features of a user as determined from a multi-media input content so as to include it in the AI generated content, with the teaching of the combination of Maschmeyer in view of Bouvier further in view of Shrivastava further in view of Harris which provides AI event generation based on received multi-media input related to the life and experiences of a user, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of delivering generated responses that convey proper emotions to put the listener in a similar position as that determined from the input content, ensuring a deeper connection with the listener. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 13, system claim 13 and method claim 6 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 13 is similarly rejected under the same rationale as applied above with respect to method claim 6.

As for claim 19, computer program product claim 19 and method claim 6 are related as a computer program product storing executable instructions required for performing the claimed method steps on a computer. Accordingly, claim 19 is similarly rejected under the same rationale as applied above with respect to method claim 6.

Claims 7, 14 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Maschmeyer (US 2024/0256764 A1) in view of Bouvier (WO 2024/200170 A1) further in view of Shrivastava (US 2021/0374338 A1), and further in view of Barry (US 2023/0259719 A1) as applied to claim 4, and further in view of Manteau-Rao et al. (US 2023/0360772 A1: hereafter — Manteau-Rao).
For claim 7, claim 4 is incorporated but the combination of Maschmeyer in view of Bouvier further in view of Shrivastava and further in view of Barry fails to disclose the limitations of this claim, for which the reference of Manteau-Rao is now introduced to teach as the artificial intelligence based (AI-based) event generation method, further comprising combining, by the one or more hardware processors, the one or more cloned voices with one or more cloned images of the one or more users to generate a virtual reality displaying that the one or more cloned images of the one or more users read the one or more memoir events in the one or more cloned voices of the one or more users, using one or more virtual reality based technologies (Manteau-Rao: [0069] — customising a virtual reality environment and having a user create a therapist avatar (representative of a cloned user image); [0103]–[0104] — performing voice cloning whereby a generated avatar may speak with a user’s friend’s voice while in a virtual reality environment, the avatar being able to communicate with synthesised voice).

The combination of Maschmeyer in view of Bouvier further in view of Shrivastava and further in view of Barry provides teaching for the generation of a cloned user voice as well as the generation of memoir-events as a story. This however differs from the claimed invention in that the claimed invention further provides teaching for the presence of a cloned image which reads the memoir events with the cloned voice in a virtual reality environment. This is however not new to the art as the reference of Manteau-Rao is seen to teach above.
Hence, before the effective filing date of the claimed invention, one of ordinary skill in the art would have found it obvious to incorporate the known-teaching of Manteau-Rao which provides an avatar reading generated text using a cloned voice in a virtual reality environment, with the teaching of the combination of Maschmeyer in view of Bouvier further in view of Shrivastava and further in view of Barry which provides teaching for the generation of a cloned user voice as well as the generation of memoir-events as a story, to thereby come up with the claimed invention. The combination of both prior art elements would have provided the predictable result of audibly presenting a generated story in the target user’s voice within a virtual reality environment, to present the feeling of it being presented by the intended speaker. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 415-421, 82 USPQ2d 1385, 1395-97 (2007).

As for claim 14, system claim 14 and method claim 7 are related as system and the method of using same, with each claimed element’s function corresponding to the claimed method step. Accordingly, claim 14 is similarly rejected under the same rationale as applied above with respect to method claim 7.

As for claim 20, computer program product claim 20 and method claim 7 are related as a computer program product storing executable instructions required for performing the claimed method steps on a computer. Accordingly, claim 20 is similarly rejected under the same rationale as applied above with respect to method claim 7.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure. Chen et al. (US 2025/0356196 A1) provides teaching for, as related to a text-generation process, generating a new token in a sentence based on a score for altering the text generation process, such that respective sampling probabilities can be updated by reweighting a probability distribution over a vocabulary [0030]. Dey et al.
(US 2025/0322166 A1) provides teaching for a sequence generation model configured to insert an end token to terminate generation of a key phrase after a threshold number of content tokens have been generated for the key phrase [0047]. Cohen et al. (US 2025/0259078 A1) provides teaching for a GPT model which can output a probability distribution over a vocabulary for the subsequent token conditioned on the sequence of preceding tokens [0083]. SHU et al. (US 2020/0066262 A1) provides teaching for inputting a last generated word into a decoder to obtain a probability distribution of a word to be generated next in a dictionary, with the word with the largest probability used as the output, this being continued until a complete sentence gets generated [0177]. Liu et al. (US 2023/0335123 A1) provides teaching for a linguistic analysis program 101 that selects a previously generated voice image having the highest degree of similarity to the voice image generated for the user input as a matching voice image [0059]. Chang et al. (US 2017/0124447 A1) provides teaching for receiving an input query having linguistic terms that have a collection of input tokens, with a deep-structured neural network that includes a first part that produces word embeddings associated with the respective input tokens, a second part that generates state vectors that capture context information associated with the input tokens, and a third part which distinguishes important parts of the input linguistic item from less important parts (Abstract).

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to OLUWADAMILOLA M. OGUNBIYI whose telephone number is (571) 272-4708. The Examiner can normally be reached Monday - Thursday (8:00 AM - 5:30 PM Eastern Standard Time). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner’s Supervisor, PARAS D. SHAH, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/OLUWADAMILOLA M OGUNBIYI/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

11/28/2025
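The token-generation process described for the Chen, Cohen, SHU and Dey references in the Conclusion follows one common pattern: condition on the last generated token, take the most probable next token from a distribution over the vocabulary, and stop at an end token (SHU) or after a length threshold (Dey). A self-contained toy sketch of that greedy loop (the bigram table, token names, and probabilities are invented for illustration and do not come from any cited reference):

```python
# Toy next-token distribution table standing in for a trained language
# model: for each token, the probability of each possible successor.
NEXT_TOKEN_PROBS = {
    "<s>": {"life": 0.6, "the": 0.4},
    "life": {"events": 0.7, "is": 0.3},
    "events": {"matter": 0.6, "<end>": 0.4},
    "matter": {"<end>": 1.0},
    "the": {"end": 1.0},
    "is": {"<end>": 1.0},
    "end": {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> list:
    """Greedy decoding: feed the last generated token back in, pick the
    highest-probability next token, and stop either at the end token or
    once a threshold number of tokens has been generated."""
    tokens, current = [], "<s>"
    while len(tokens) < max_tokens:
        dist = NEXT_TOKEN_PROBS[current]
        current = max(dist, key=dist.get)  # argmax over the vocabulary
        if current == "<end>":
            break
        tokens.append(current)
    return tokens
```

Picking the argmax at each step is only one decoding strategy; the reweighting of sampling probabilities that Chen describes would instead adjust the distribution before a token is drawn from it.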

Prosecution Timeline

May 20, 2024
Application Filed
Nov 28, 2025
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579979
NAMING DEVICES VIA VOICE COMMANDS
2y 5m to grant Granted Mar 17, 2026
Patent 12537007
METHOD FOR DETECTING AIRCRAFT AIR CONFLICT BASED ON SEMANTIC PARSING OF CONTROL SPEECH
2y 5m to grant Granted Jan 27, 2026
Patent 12508086
SYSTEM AND METHOD FOR VOICE-CONTROL OF OPERATING ROOM EQUIPMENT
2y 5m to grant Granted Dec 30, 2025
Patent 12499885
VOICE-BASED PARAMETER ASSIGNMENT FOR VOICE-CAPTURING DEVICES
2y 5m to grant Granted Dec 16, 2025
Patent 12469510
TRANSFORMING SPEECH SIGNALS TO ATTENUATE SPEECH OF COMPETING INDIVIDUALS AND OTHER NOISE
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
96%
With Interview (+18.6%)
2y 12m
Median Time to Grant
Low
PTA Risk
Based on 304 resolved cases by this examiner. Grant probability derived from career allow rate.
