Prosecution Insights
Last updated: April 19, 2026
Application No. 18/226,937

Virtual event platform event engagement

Final Rejection — §101, §103
Filed: Jul 27, 2023
Examiner: KONERU, SUJAY
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Hubilo Technologies Inc.
OA Round: 2 (Final)
Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 95%

Examiner Intelligence

Grants 58% of resolved cases.
Career Allow Rate: 58% (421 granted / 722 resolved), +6.3% vs TC avg
Interview Lift: +37.0% (strong), comparing resolved cases with an interview to those without
Typical timeline: 3y 2m avg prosecution; 36 currently pending
Career history: 758 total applications across all art units
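For readers who want the arithmetic, the headline figures above reduce to simple ratios over the examiner's docket. Below is a minimal Python sketch, assuming the interview lift is the difference in allow rate between resolved cases with and without an interview; the with/without split counts are invented for illustration and are not from the source.

```python
# Minimal sketch of the arithmetic behind the examiner panel above.
# Counts (421 granted / 722 resolved) are from the panel; the
# with/without-interview split below is invented for illustration only.

granted, resolved = 421, 722
allow_rate = granted / resolved                    # 421/722 ~= 58.3%

# Hypothetical split consistent with the totals above (assumed numbers):
with_iv    = {"granted": 200, "resolved": 240}
without_iv = {"granted": 221, "resolved": 482}

lift = (with_iv["granted"] / with_iv["resolved"]
        - without_iv["granted"] / without_iv["resolved"])

print(f"Career allow rate: {allow_rate:.1%}")      # 58.3%
print(f"Interview lift:    {lift:+.1%}")           # roughly +37 points
```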

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 2.0% (-38.0% vs TC avg)
§112: 7.4% (-32.6% vs TC avg)
Tech Center averages are estimates • Based on career data from 722 resolved cases
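Whatever the underlying per-statute metric, the "vs TC avg" deltas shown above are all internally consistent with a single Tech Center average of 40.0%. A minimal sketch that recomputes the deltas, with the TC averages back-solved from the displayed figures (so they are estimates, not source data):

```python
# Sketch: recomputing the "vs TC avg" deltas shown above. The examiner
# per-statute rates are from the panel; the Tech Center averages are
# back-solved from the displayed deltas, so treat them as estimates.

examiner = {"§101": 0.379, "§103": 0.507, "§102": 0.020, "§112": 0.074}
tc_avg   = {"§101": 0.400, "§103": 0.400, "§102": 0.400, "§112": 0.400}

for statute, rate in examiner.items():
    delta = rate - tc_avg[statute]
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```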

Office Action

§101, §103
DETAILED ACTION

This Final Office Action is in response to Applicant's amendments and arguments filed on October 24, 2025. Applicant has amended claims 1-10 and 12-13. Currently, claims 1-13 are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

The 35 U.S.C. 101 rejections of claims 1-13 are maintained in light of applicant's amendments to claims 1-10 and 12-13. The 35 U.S.C. 103 rejections of claims 1-13 are withdrawn in light of applicant's amendments to claims 1-10 and 12-13. Applicant's amendments necessitated the new grounds for rejection in this office action.

Response to Arguments

Applicant's remarks submitted on 10/24/25 have been considered but are not persuasive. Applicant argues on p. 7 of the remarks that the 101 rejections are improper. Examiner disagrees. Applicant argues on p. 8 of the remarks that the abstract idea is integrated into a practical application by improving another technology, a virtual event computing platform. Examiner disagrees and notes that the virtual event computing platform is a tool used for implementing the abstract idea, as opposed to what is being improved. What is being improved is analytics related to engagement for an event. Applicant further argues that the ordered combination is not well understood, routine, and conventional in the field. Examiner disagrees and notes that examiner has cited p. 4-5 and 10-12 of applicant's own specification to show these elements are conventional, and further notes that the ordered combination of elements was properly considered and that the sequence of elements is conventional, where data is received, analyzed, and then the analysis is output. Therefore, the 101 rejections are maintained. Applicant further argues on p. 9 of the remarks that the 103 rejections are improper. Examiner disagrees. Applicant provides summaries of the primary references used. Examiner notes that the machine learning is taught by the newly applied Sahasi reference. Jain teaches the rest of the amended language at, among other places, para [0038] and [0141], as shown in the rejection below. Therefore, the 103 rejections are maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are clearly drawn to at least one of the four categories of patent-eligible subject matter recited in 35 U.S.C. 101 (method, hardware processor, and a computer program product in a non-transitory computer-readable medium). Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Claims 1, 12, and 13 recite the abstract idea of provisioning and managing an event at an event platform by receiving data defining a set of actions for the event, the set of actions each having an associated weighting, wherein at least a first action in the set has a weighting that differs from the weighting of a second action in the set; upon initiation of the event, receiving tracking data associated with participant activities with respect to the defined set of actions; determining an engagement score for the event using the received tracking data and the weightings for the set of actions, wherein the engagement score for the event is computed by generating an event engagement score for each participant having participant activity with respect to the event platform actions as defined in the set of event actions and aggregating the event engagement scores so generated; and outputting the engagement score for the event together with a comparison of the engagement score with respect to engagement scores from one or more other events. The claims are directed to a type of analytics related to engagement for an event.

Under prong 1 of Step 2A, these claims are considered abstract because the claims are certain methods of organizing human activity, including business relations. Applicant's claims are organized human activity because the claims show receiving data related to participant activities with an event (human activity), and that data is organized by determining an engagement score from that human activity data.

Under prong 2 of Step 2A, the judicial exception is not integrated into a practical application because the claims (the judicial exception and any additional elements individually or in combination, such as virtual events; virtual event platform touch point actions; the received data having been learned by applying machine learning to historical event engagement data generated from past virtual events on the virtual event computing platform; wherein each participant activity consists of a virtual event platform touch point action that occurs on the virtual event computing platform as determined by a monitored interaction between a participant computing device and the virtual event computing platform; and software-as-a-service infrastructure for provisioning and managing a virtual event, comprising a set of hardware processors and computer memory holding computer program code executed by the one or more hardware processors, the computer program code comprising program code configured to perform steps, and a computer program product in a non-transitory computer-readable medium, the computer program product comprising computer program code executable by a hardware processor to provision and manage a virtual event, the computer program code configured to perform steps) are not an improvement to a computer or a technology; the claims do not apply the judicial exception with a particular machine; the claims do not effect a transformation or reduction of a particular article to a different state or thing; nor do the claims apply the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claims as a whole are more than a drafting effort designed to monopolize the exception. These limitations at best merely implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea - see MPEP 2106.05(f).
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements individually or in combination, such as virtual events; virtual event platform touch point actions; the received data having been learned by applying machine learning to historical event engagement data generated from past virtual events on the virtual event computing platform; wherein each participant activity consists of a virtual event platform touch point action that occurs on the virtual event computing platform as determined by a monitored interaction between a participant computing device and the virtual event computing platform; and software-as-a-service infrastructure for provisioning and managing a virtual event, comprising a set of hardware processors and computer memory holding computer program code executed by the one or more hardware processors, the computer program code comprising program code configured to perform steps, and a computer program product in a non-transitory computer-readable medium, the computer program product comprising computer program code executable by a hardware processor to provision and manage a virtual event, the computer program code configured to perform steps (as evidenced by p. 4-5, 10-12 of applicant's own specification), are well understood, routine, and conventional in the field.

Dependent claims 2-10 also do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, either individually or in combination, are merely an extension of the abstract idea itself by further showing outputting additional information in association with the engagement score, the additional information being generated by the event platform based at least in part on the engagement scores from the one or more other events; wherein the additional information includes a recommendation with respect to at least one of: the set of actions, and a weighting associated with a particular action in the set of actions; wherein the engagement score is a number; adjusting the engagement score based on a context analysis; wherein the context analysis adjusts the engagement score based on a comparison of a number of participants at the event compared to a total number of individuals registered to attend the event; wherein the context analysis adjusts the engagement score based on the availability of a given action during the event; wherein the context analysis adjusts the engagement score based on a frequency of the set of actions; and adjusting the set of actions and their associated weightings for a subsequent event based at least in part on the engagement score.

Dependent claims 2-11 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements individually or in combination, such as virtual events, virtual event platform touch point actions, and wherein the adjusting is carried out in an automated manner (as evidenced by p. 4-5, 10-12 of applicant's own specification), are well understood, routine, and conventional in the field.

Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claims 1-6, 8-9, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Jain et al. (US 2020/0153915 A1) (hereinafter Jain) in view of Chan et al. (US 2023/0239169 A1) (hereinafter Chan), further in view of Sahasi et al. (US 2023/0004999 A1) (hereinafter Sahasi).

Claims 1, 12-13: Jain, as shown, discloses the following limitations of claim 1: A method (and corresponding software-as-a-service infrastructure and computer program product - see para [0129], [0161]-[0169] and Figs. 1, 8, showing equivalent computing functionality and structure for implementing steps) for provisioning and managing a virtual event at a virtual event computing platform, comprising: receiving data defining a set of virtual event platform touch point actions for the virtual event, the set of actions each having an associated weighting (see para [0003]-[0004], "Techniques and systems are described to determine and facilitate participant engagement in an online session, such as an online learning session for education, distance learning, webinars, and courses. A computing device, such as a server computing device implemented in a cloud-based system, implements a participant engagement system that accurately determines engagement levels of participants at time intervals of an online session, and facilitates participant engagement, such as by communicating (e.g., in a conversation or a chat message) with low-engagement participants and a presenter of the online session. The participant engagement system obtains indicators of user actions from user devices of participants in an online session, such as from a client application running on the user devices at predetermined time intervals configured by a presenter of the online session. The user actions may include any suitable actions or inputs made by a participant during the online session, such as a user answer to a quiz, participation in a poll or chat, whether a participant has minimized a user interface via which the online session is displayed at a user device, whether a participant has disabled an audio device (e.g., a speaker), whether a participant has replayed content of the online session, and the like. The participant engagement system generates a mapping that maps user actions and content presented during the online session to a timeline of the online session, where the mapping is based on the indicators of the user actions and the mapping is generated to indicate the user actions and the content at time intervals of the timeline.
At each time interval of the timeline, the participant engagement system generates an engagement score for each participant based on the mapping of the user actions and the content, such as by weighting user actions at each time interval, and may rank the participants based on their respective engagement scores. The participant engagement system identifies one or more low-engagement participants from their engagement scores, such as those participants in a bottom percentage of the ranking (e.g., bottom 15%)."), wherein at least a first action in the set has a weighting that differs from the weighting of a second action in the set (see para [0115], "In one example, engagement module 142 assigns an engagement score to each participant in an online session, and updates the engagement scores throughout the online session, such as at time intervals defined by presenter 108. Engagement module 142 may assign engagement scores based on a mapping generated by mapping module 138. For instance, engagement module 142 can assign weights to user actions indicated in a mapping and combine the weights, (e.g., average the weights, sum the weights, etc.) to determine a respective engagement score of a participant at each time interval."); upon initiation of and during the virtual event, receiving tracking data associated with participant activities with respect to the defined set of virtual event platform touch point actions (see para [0126], " By monitoring user actions during an online session, system 400 accurately tracks engagement levels of participants during an online session relative to content of the online session. Hence, system 400 can efficiently identify low-engagement participants having a high probability of dropping out of an online session, and actively communicate context for the online session (e.g., content and other participant identities) to increase a participant's engagement level in the online session. Furthermore, system 400 can communicate context of an online session to a presenter of an online session, so the presenter can take active steps to assist low-engagement participants during the online session, such as by revisiting a topic already presented. Accordingly, system 400 increases the usefulness of an online session by increasing the engagement level of low-engagement participants and preventing them from dropping out of the online session, making the online session efficient for the participants and the presenter."), wherein each participant activity consists of a virtual event platform touch point action that occurs on the virtual event computing platform as determined by a monitored interaction between a participant computing device and the virtual event computing platform (see para [0038], "Distance learning application 118 can record data indicative of any suitable user actions during an online session, such as minimizing a user interface that exposes the learning session, turning off a sound device, rewinding a part of the learning session, pausing a topic of the learning session, answering a quiz of the learning session, entering a selection in a poll of the learning session, entering text in a chat of the learning session, and the like. In one example, distance learning application 118 communicates indicators of user actions via network 106 to participant engagement system 114 of server 112 (this communication is illustrated in FIG. 1 with an arrow coupling distance learning application 118 to a monitoring module of participant engagement application 116). 
Additionally or alternatively, distance learning application 118 can include a copy of the participant engagement application 116."); determining an engagement score for the virtual event using the received tracking data and the weightings for the set of virtual event platform touch point actions (see para [0004], "At each time interval of the timeline, the participant engagement system generates an engagement score for each participant based on the mapping of the user actions and the content, such as by weighting user actions at each time interval, and may rank the participants based on their respective engagement scores."), wherein the engagement score for the virtual event is computed (i) by generating an event engagement score for each participant having participant activity with respect to the virtual event platform touch point actions as defined in the set of virtual event platform touch point actions, and (ii) aggregating the event engagement scores so generated (see para [0141], "A respective score for each participant in the online session at each time interval of the timeline is determined based on the mapping of the user actions, the respective score based on a weighting of the user actions for each participant at each time interval as indicated by the mapping and based on summing results of the weighting, the respective score indicating a respective level of participant engagement of each participant in the online session (block 606). For example, engagement module 142 determines a respective score for each participant in the online session at each time interval based on the mapping of the user actions, where the respective score is based on a weighting of the user actions for each participant at each time interval as indicated by the mapping and based on summing results of the weighting, the respective score indicating a level of participant engagement of each participant in the online session." and see para [0069]); and outputting the engagement score for the virtual event together with a comparison of the engagement score with respect to engagement scores from one or more other virtual events (see para [0024], "In one example, the participant engagement system compares engagement scores for each participant at each time interval to a threshold engagement score to determine the low-engagement participants having an engagement level in the online session below the threshold engagement level. Additionally or alternatively, the participant engagement system may rank participants according to their engagement scores, and a threshold percentage of the participants can be selected as participants having an engagement level in the online session below a threshold engagement level (e.g., low-engagement participants), such as those participants in the bottom 15% according to their engagement scores." 
and see para [0043], "Storage 122 also includes participation data 128, including data regarding participant engagement in an online session, such as engagement scores of participants, rankings of participants, such as rankings according to engagement scores, groupings of participants, such as groupings of participants having low-, medium-, and high-engagement levels, content for which participants are determined to have low or high engagement, a time interval of the online session for which participants are determined to have low or high engagement, thresholds (e.g., threshold engagement levels, threshold scores, percentage thresholds used to determine a group of participants having low engagement, etc.), combinations thereof, and the like." where it is obvious to one of ordinary skill in the art that a ranking shows a comparison).

Although Jain suggests that outputting the engagement score for the virtual event together with a comparison of the engagement score with respect to engagement scores from one or more other virtual events is obvious, it is not explicit. In analogous art, Chan discloses the following limitations: outputting the engagement score for the virtual event together with a comparison of the engagement score with respect to engagement scores from one or more other virtual events (see para [0113], "The analytics display 900 also includes a section displaying an engagement score, expressed in percentages, determined for each of the booths 908 based on the interaction data. For instance, booth 602b had the highest engagement score, 91%. The other booths 602a, c, d had lower engagement scores, ranging from 21% to 64%. Other example analytics might include, for example, overall satisfaction or end-to-end experience score. Further examples might include success scores for particular elements that are designed specially to engage participants." showing comparison of scores of different booths, where the booths can be virtual events).

It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the teachings of Chan with Jain because enabling a comparison of different virtual events provides added analytics on engagement levels to improve business decisions (see Chan, para [0012]-[0014]). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the method for virtual expo analytics as taught by Chan in the system of participant engagement detection and control of Jain, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Jain and Chan do not specifically disclose the received data having been learned by applying machine learning to historical event engagement data generated from past virtual events on the virtual event computing platform. In analogous art, Sahasi discloses the following limitations: the received data having been learned by applying machine learning to historical event engagement data generated from past virtual events on the virtual event computing platform (see para [0053], "For example, the analytics subsystem 142 may receive activity data indicative of a plurality of engagements of a user device with a plurality of media assets (e.g., digital content).
The analytics subsystem 142 may receive the activity data via the client application 106 executing on the user device. Each of the plurality of media assets may comprise a plurality of content features, as further described herein. The analytics subsystem 142 may generate a UIC associated with that particular user and/or user device. The UIC may include at least one content feature of the plurality of content features (e.g., representing content features associated with content with which the user has engaged). The UIC may also include, as further described herein, at least one interest attribute representing a level of interest for each of the media assets consumed by the user/user device. As further described herein, the UIC can be used by a machine learning model to identify one or more of the media assets 166 that are likely to be of interest to a user corresponding to the UIC." and see para [0057]).

It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the teachings of Sahasi with Jain and Chan because using machine learning enables analysis of larger amounts of data to provide insights (see Sahasi, para [0001]-[0002]). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the system for user segmentation and analysis as taught by Sahasi in the Jain and Chan combination, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 2: Jain does not specifically disclose outputting additional information in association with the engagement score, the additional information being generated by the virtual event platform based at least in part on the engagement scores from the one or more other virtual events. In analogous art, Chan discloses the following limitations: outputting additional information in association with the engagement score for the virtual event, the additional information being generated by the virtual event computing platform based at least in part on the engagement scores from the one or more other virtual events (see para [0113], "The analytics display 900 also includes a section displaying an engagement score, expressed in percentages, determined for each of the booths 908 based on the interaction data. For instance, booth 602b had the highest engagement score, 91%. The other booths 602a, c, d had lower engagement scores, ranging from 21% to 64%. Other example analytics might include, for example, overall satisfaction or end-to-end experience score. Further examples might include success scores for particular elements that are designed specially to engage participants.").

It would have been obvious to one of ordinary skill in the art at the time of the invention to include the method for virtual expo analytics as taught by Chan in the system of participant engagement detection and control of Jain, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Claims 3-6: Further, Jain discloses the following limitations: wherein the additional information includes a recommendation with respect to at least one of: the set of virtual event platform touch point actions, and a weighting associated with a particular virtual event platform touch point action in the set of virtual event platform touch point actions (see para [0075], "Context preparation module 144 can determine content for increasing an engagement level of one or more low-engagement participants having an engagement level below a threshold engagement level based on the correlating participants. As an example, context preparation module 144 may identify from a mapping a participant that chatted with a low-engagement participant, and determine content to recommend to the low-engagement participant from a question asked by the participant who chatted with the low-engagement participant.");

wherein the engagement score for the virtual event is a number (see para [0069], "In one example, engagement module 142 determines a respective engagement score for each participant in an online session. For instance, indicators of user actions obtained by monitoring module 136, such as listed in Table 1, for each participant are each assigned a respective weight by engagement module 142. An engagement score for a participant can be determined from the respective weights for the participant, such as by summing the respective weights, averaging the respective weights, forming a weighted combination of values assigned to user actions, and the like." where it is obvious to one of ordinary skill in the art that summed amounts would be represented as a number, and see para [0072], showing the engagement score represented as a percentage);

wherein the set of virtual event platform touch point actions for the virtual event include networking, visiting a virtual booth, and attending a session (see para [0022], "Accordingly, this disclosure describes systems, devices, and techniques for determining and facilitating participant engagement in online sessions, such as online learning sessions for education, distance learning, webinars, and courses. A computing device, such as a server computing device implemented in a cloud-based system, implements a participation engagement system that obtains indicators from user devices of participants in an online session. In one example, a distance learning application is enabled on client devices of users who participate in an online learning session, and supplies indicators of user actions to the participation engagement system on the server device. For instance, a distance learning application may record indicators of user actions and parameters of the client device during the online learning session. At predetermined times (e.g., at periodic intervals during the online learning session), the distance learning application of a user device of a participant in the online learning session may communicate (e.g., over a network) indicators of user actions and parameters of the user device to the server operating the participant engagement system.
The participant engagement system determines engagement levels of the participants based on user actions and takes actions with participants, a presenter, or both to facilitate participant engagement, such as when one or more participants of the online learning session have engagement levels below a threshold engagement level.");

adjusting the engagement score for the virtual event based on a context analysis (see para [0075]-[0076], "Context preparation module 144 is representative of functionality of the engagement module 142 configured to determine content, participants, or content and participants to increase an engagement level of a participant in an online session. Context preparation module 144 can determine content, participants, or content and participants to increase an engagement level of a participant in an online session in any suitable way. In one example, context preparation module 144 determines participants of an online session that correlate to one or more participants having an engagement level below a threshold engagement level determined by engagement module 142. For instance, context preparation module 144 may determine correlating participants based on a mapping provided by mapping module 138. Context preparation module 144 can determine content for increasing an engagement level of one or more low-engagement participants having an engagement level below a threshold engagement level based on the correlating participants. As an example, context preparation module 144 may identify from a mapping a participant that chatted with a low-engagement participant, and determine content to recommend to the low-engagement participant from a question asked by the participant who chatted with the low-engagement participant. Additionally or alternatively, context preparation module 144 can determine content, participants, or content and participants to increase an engagement level of a participant in an online session based on data structures that include indicators of user actions during the online session, such as data structures obtained by monitoring module 136 and used to generate a mapping by mapping module 138. For instance, indicators of user actions can be packaged in respective data structures for respective participants of an online session, such as by distance learning application 118, monitoring module 136, or mapping module 138. Each data structure can indicate any suitable data regarding a user action and the online session, such as a time interval of the learning session, content presented during the time interval, and a user action during the time interval.").

Claims 8-9: Further, Jain discloses the following limitations: wherein the context analysis adjusts the engagement score for the virtual event based on the availability of a given virtual event platform touch point action during the virtual event (see para [0075]-[0076], where recommended content based on the action of chat participation adjusts the engagement level); wherein the context analysis adjusts the engagement score for the virtual event based on a frequency of the set of virtual event platform touch point actions (see para [0051], Table 1, showing that the frequency of certain actions, such as rewinding content, is a user action that impacts the engagement score).

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Jain, Chan, and Sahasi, as applied above, and further in view of Kumbi et al. (US 2021/0209629 A1) (hereinafter Kumbi).
Claim 7: Jain, Chan, and Sahasi do not specifically disclose wherein the context analysis adjusts the engagement score based on a comparison of a number of participants at the virtual event compared to a total number of individuals registered to attend the virtual event. In analogous art, Kumbi discloses the following limitations: wherein the context analysis adjusts the engagement score for the virtual event based on a comparison of a number of participants at the virtual event compared to a total number of individuals registered to attend the virtual event (see para [0038], "In accordance with embodiments herein, the application 110 facilitates predicting event outcomes. In particular, predicting an event outcome based on a predicted audience behavior related to an event along with real-time audience behavior related to the event. In embodiments, an event can be selected, for instance, by a user of application 110. A “user” can be a marketer, publisher, editor, author, or other person who employs the attendance optimization system to analyze events and view predicted event outcomes based on predicted audience behavior related to the event that is modified based on the actual registration behavior of an audience of the event. A user can designate an attendance goal for an event. Based on an audience invited to the event, an expected registration profile can be generated for the event that provides an indication of predicted audience behavior for the event. Such an expected registration profile can be based on a predicted pattern of registrations over time for the event. This expected registration profile is used to analyze real-time audience behavior leading up to the event. In particular, the expected registration profile can be used to analyze the predicted audience behavior in light of real-time audience behavior.").

It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the teachings of Kumbi with Jain, Chan, and Sahasi because including a comparison of registered participants with those that attended provides additional analytics that enable more effective planning of events (see Kumbi, para [0001]-[0002]). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the system for continuous updating of predicted event outcomes using real-time audience behavior as taught by Kumbi in the Jain, Chan, and Sahasi combination, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Jain, Chan, and Sahasi, as applied above, and further in view of Castera et al. (US 2014/0122622 A1) (hereinafter Castera).

Claims 10-11: Jain, Chan, and Sahasi do not explicitly disclose adjusting the set of actions and their associated weightings for a subsequent virtual event based at least in part on the engagement score.
In analogous art, Castera discloses the following limitations: adjusting the set of virtual event platform touch point actions and their associated weightings for a subsequent virtual event based at least in part on the engagement score for the virtual event (see para [0148], "the prediction model, using machine learning techniques, may monitor the engagement levels of published social media messages to adjust the weight values given to various message information attributes that are found in the social media messages that the prediction model is monitoring. This monitoring and adjustment of weight values of message information attributes by the prediction model may allow the prediction model to more accurately predict the engagement level of future social media messages. For example, the prediction model may find that a number of messages that contain the text "Click this" tend to draw engagement levels that are higher than messages with similar content but without the "Click this" text. Based on this information, the prediction model can then identify "Click this" message text as an attribute that tends to increase the engagement level of a message, and adjust the weight value of that attribute in calculating the engagement scores of future social media messages." where it would be obvious to one of ordinary skill in the art that virtual event weightings could be adjusted for subsequent events in the same way that weights used in the engagement scores of subsequent social media messages are adjusted); wherein the adjusting is carried out in an automated manner (see para [0148], where machine learning shows automation given the broadest reasonable interpretation).

It would have been obvious to one of ordinary skill in the art at the time of the invention to combine the teachings of Castera with Jain, Chan, and Sahasi because adjusting weightings for subsequent events is effective in generating future engagement (see Castera, para [0020]-[0021]). Moreover, it would have been obvious to one of ordinary skill in the art at the time of the invention to include the method for providing near real-time predicted engagement level feedback to a user as taught by Castera in the Jain, Chan, and Sahasi combination, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Daboll et al. (US 2011/0208585 A1) discloses a system for measurement of engagement by receiving business objectives of a web site or online publisher on a server, tracking user frequency and user activities for a predetermined time, computing and ranking engagement scores with the web site based on the tracked user frequency as a function of user action categories for the predetermined time and business objectives, the user action categories being associated with the user activities, segmenting users based on the engagement scores, and directing an advertisement to a user of at least one user segment.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SUJAY KONERU, whose telephone number is (571) 270-3409. The examiner can normally be reached M-F, 8:30 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia Munson, can be reached at 571-270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SUJAY KONERU/
Primary Examiner, Art Unit 3624
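Stripped of the legal framing, the claim 1 scoring scheme that the rejection maps onto Jain is a weighted sum per participant followed by an event-level aggregate that is compared across events. The sketch below is a minimal illustration of that scheme only; the action names, weights, example data, and the choice of mean aggregation are assumptions, not taken from the claims or the cited references.

```python
# Illustrative sketch of the claim 1 scoring scheme as characterized in
# the rejection above: weighted touch point actions, a score per
# participant, and an event-level aggregate compared to other events.
# Action names, weights, and mean aggregation are all assumed.

action_weights = {
    "attend_session": 3.0,   # assumed weight
    "visit_booth": 2.0,      # assumed weight
    "chat_message": 1.0,     # assumed weight
}

# Tracking data: participant -> {touch point action: observed count}
tracking = {
    "p1": {"attend_session": 2, "chat_message": 5},
    "p2": {"visit_booth": 1},
}

def participant_score(activity: dict[str, int]) -> float:
    """Weighted sum of one participant's touch point actions."""
    return sum(action_weights.get(a, 0.0) * n for a, n in activity.items())

scores = {p: participant_score(acts) for p, acts in tracking.items()}
event_score = sum(scores.values()) / len(scores)   # aggregate (mean, assumed)

# Output together with a comparison against other events (assumed data).
other_events = {"spring_summit": 6.5, "fall_expo": 9.0}
for name, s in other_events.items():
    print(f"this event {event_score:.2f} vs {name} {s:.2f}")
```

The context analysis of dependent claims 7-9 would then adjust event_score further, for example by the ratio of attendees to registrants per the Kumbi mapping; that adjustment is omitted here for brevity.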

Prosecution Timeline

Jul 27, 2023
Application Filed
Apr 21, 2025
Non-Final Rejection — §101, §103
Oct 24, 2025
Response Filed
Nov 03, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12596979 — PERSONALIZED RISK AND REWARD CRITERIA FOR WORKFORCE MANAGEMENT
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596972 — CONVERSATION-BASED MESSAGING METHOD AND SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12585868 — SYSTEM TO TRACE CHANGES IN A CONFIGURATION OF A SERVICE ORDER CODE FOR SERVICE FEATURES OF A TELECOMMUNICATIONS NETWORK
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579553 — REUSABLE DATA SCIENCE MODEL ARCHITECTURES FOR RETAIL MERCHANDISING
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572990 — METHODS AND IoT SYSTEMS FOR MONITORING WELDING OF SMART GAS PIPELINE BASED ON GOVERNMENT SUPERVISION
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed in order to get these cases past this examiner. Based on the 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 95% (+37.0%)
Median Time to Grant: 3y 2m
PTA Risk: Moderate
Based on 722 resolved cases by this examiner. Grant probability derived from career allow rate.
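These projection figures are consistent with simple arithmetic over the examiner stats above: the grant probability equals the career allow rate (421/722), and the with-interview figure matches that rate plus the +37.0 point lift. A minimal sketch, assuming the lift is applied as additive percentage points; the function name is illustrative.

```python
# Sketch: deriving the projection figures above. Assumes the grant
# probability is the career allow rate and that the interview lift is
# additive percentage points, which matches 58% + 37.0 points -> 95%.

base_allow_rate = 421 / 722            # career allow rate ~= 58.3%

def with_interview(base: float, lift_points: float = 0.37) -> float:
    """Apply the interview lift as additive points, capped at 100%."""
    return min(base + lift_points, 1.0)

print(f"Grant probability: {base_allow_rate:.0%}")                  # 58%
print(f"With interview:    {with_interview(base_allow_rate):.0%}")  # 95%
```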
