Prosecution Insights
Last updated: April 19, 2026
Application No. 17/810,902

SYSTEM AND METHOD FOR QUANTIFYING DIGITAL EXPERIENCES

Non-Final OA: §101, §103, §112
Filed: Jul 06, 2022
Examiner: FORRISTALL, JOSHUA L
Art Unit: 2857
Tech Center: 2800 — Semiconductors & Electrical Systems
Assignee: Nexthink SA
OA Round: 3 (Non-Final)
Grant Probability: 69% (Favorable)
Predicted OA Rounds: 3-4
Estimated Time to Grant: 3y 3m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 69% (40 granted / 58 resolved; +1.0% vs TC avg, above average)
Interview Lift: +23.4% in resolved cases with interview (strong)
Avg Prosecution: 3y 3m (45 applications currently pending)
Total Applications: 103 across all art units
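The headline figures above are simple arithmetic on the reported counts. As a sanity check, a short sketch (all values copied from this report) reproduces them:

```python
# Reproduce the headline examiner statistics reported above.
granted, resolved = 40, 58

# Career allow rate: granted over resolved cases.
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # 69%

# The +23.4% interview lift, added to the base grant probability,
# matches the reported 92% "with interview" figure.
base_probability = 0.69
interview_lift = 0.234
print(f"With interview: {base_probability + interview_lift:.0%}")  # 92%
```

Note that 69% + 23.4% rounds to the 92% "with interview" probability shown above, so the two figures are internally consistent.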

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 9.0% (-31.0% vs TC avg)
§112: 22.1% (-17.9% vs TC avg)
TC averages are estimates; based on career data from 58 resolved cases.
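Each per-statute delta implies an estimated Tech Center average for that statute. The report does not state exactly what these percentages measure, so the sketch below (values copied from the table above) only performs the subtraction:

```python
# Examiner's statute-specific rates and deltas vs the Tech Center average,
# copied from the table above; the implied TC average follows by subtraction.
examiner_rate = {"101": 0.187, "103": 0.488, "102": 0.090, "112": 0.221}
delta_vs_tc = {"101": -0.213, "103": 0.088, "102": -0.310, "112": -0.179}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"§{statute}: examiner {rate:.1%}, TC avg ~{tc_avg:.1%}")
```

Running this shows that every reported delta is consistent with a TC average estimate of about 40.0% for each statute.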

Office Action

§101, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/08/2025 has been entered.

Response to Arguments

Applicant's arguments, see Remarks filed 08/08/2025, with respect to the rejection(s) of claim(s) 1, 11, and 20 under 35 U.S.C. 101 have been fully considered but are not persuasive. Modifying the IT environment is not itself an abstract idea, but it does not amount to significantly more than the abstract idea: because it is not further limited in the claim or in the specification, modifying an IT environment can be seen as insignificant activity that could amount to outputting something on a computer screen, and outputting is using a computer as a tool.

Applicant's arguments, see Remarks filed 08/08/2025, with respect to the rejection(s) of claim(s) 1, 11, and 20 under 35 U.S.C. 103 have been fully considered and are persuasive in light of the amendments. The rejection has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made in view of D. Sharma (US 20200274784 A1), A. Sharma (US 20220230118 A1), and Malleshaiah (US 20220278889 A1).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-22 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claims 1, 11, and 20 contain the limitation “comparing the endpoint data, the application data, and the collaboration data against a predefined hierarchy of data types, data types being associated with a job description,” which is not taught in the specification or any of the previous claim sets. Firstly, the specification only teaches comparing the endpoint data, application data, and collaboration data with a respective predetermined metric threshold for each group, as seen in Para. [0042], not with a predefined data hierarchy.
Furthermore, it appears that endpoint data, application data, and collaboration data are within the defined data hierarchy, as seen in Fig. 3 and Para. [0028], which defines them as subfactors of the technology performance factor of the employee experience. Lastly, Para. [0024] teaches “In other cases, the combined digital experience level 118 may represent the experiences of individuals from a common department, individuals with a common job, individuals with a common type of laptop, individuals that are individually using the same software on their respective machines, individuals trying to function in a common period of time, individuals collocated in a common geographic area, individuals trying to access a common database, etc.” However, this does not show data types that are associated with a job description, nor does it show a hierarchy of data types for the experiences of the different categories of individuals. Claims that depend on the above rejected claims are also rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The following bolded limitations are considered abstract:

“receiving, at a computer system from at least one client device via a network, technology performance data of the at least one client device over a plurality of periods of time, the at least one client device being associated with a user and an IT environment, the technology performance data comprising: endpoint data identifying operational aspects of the at least one client device; application data identifying operational aspects of at least one application executed by the at least one client device; and collaboration data identifying at least one of a collaboration software program executed by the at least one client device, wherein the endpoint data, the application data, and the collaboration data comprise time metadata and IT environment metadata identifying when and in which context an event took place; comparing the endpoint data, the application data, and the collaboration data against a predefined hierarchy of data types, data types within the predefined hierarchy of data types being associated with different job descriptions of individuals, wherein each data type with the hierarchy of datatypes has a positive interaction weight, a neutral interaction weight, and a negative interaction weight, resulting in, for each of the plurality of periods of time: (1) a weighted endpoint experience level, (2) a weighted application experience level, and (3) a weighted collaboration experience level; calculating, via at least one processor of the computer system for each period of time within the plurality of periods of time, a period of time specific level of experience based on the weighted endpoint data, the weighted application experience level, the weighted collaboration experience level, and the time and IT environment metadata, resulting in a plurality of experience levels, the plurality of experience levels respectively corresponding to the periods of time within the plurality of periods of time; wherein the period of time-specific level of experience of the user is determined using a metric-specific threshold customized to the user; computing, via the at least one processor, a cumulative experience score for each data type in the predefined hierarchy of data types over a selected timeframe by combining experience levels associated with the selected timeframe and within the plurality of experience levels, resulting in cumulative experience scores for each data type,” wherein the receiving of the technology performance data occurs via continuous reporting across the network as the technology performance data is generated by the at least one client device; and transmitting the plurality of experience levels and the cumulative experience scores for each data type to an Information Technology team; and modifying the IT environment for the at least one client device based on at least one of the plurality of experience levels and the cumulative experience scores for each data type.

With respect to claim 1, and similarly claims 11 and 20, the abstract limitations would fall within the “Mathematical Concept” and “Mental Process” groupings of abstract ideas. Calculating a level of experience from a variety of data and computing a cumulative experience score for a selected time frame are both mathematical processes, as both calculating and computing values are mathematical concepts (see specification Para. [0035]). According to MPEP 2106.04(C), “A claim that recites a mathematical calculation, when the claim is given its broadest reasonable interpretation in light of the specification, will be considered as falling within the "mathematical concepts" grouping. A mathematical calculation is a mathematical operation (such as multiplication) or an act of calculating using mathematical methods to determine a variable or number, e.g., performing an arithmetic operation such as exponentiation.
There is no particular word or set of words that indicates a claim recites a mathematical calculation. That is, a claim does not have to recite the word "calculating" in order to be considered a mathematical calculation. For example, a step of "determining" a variable or number using mathematical methods or "performing" a mathematical operation may also be considered mathematical calculations when the broadest reasonable interpretation of the claim in light of the specification encompasses a mathematical calculation.”

Comparing data and assigning weights to data types amount to a mental process, as they can be completed in the human mind using observation, judgment, and opinion.

This judicial exception is not integrated into a practical application. In particular, the claim recites the additional elements: “receiving, at a computer system from at least one client device via a network, technology performance data of the at least one client device over a plurality of periods of time, the at least one client device being associated with a user and an IT environment, the technology performance data comprising: endpoint data identifying operational aspects of the at least one client device; application data identifying operational aspects of at least one application executed by the at least one client device; and collaboration data identifying at least one of a collaboration software program executed by the at least one client device, wherein the endpoint data, the application data, and the collaboration data comprise time metadata and IT environment metadata identifying when and in which context an event took place; at least one processor; wherein the receiving of the technology performance data occurs via continuous reporting across the network as the technology performance data is generated by the at least one client device; and transmitting the plurality of experience levels and the cumulative experience scores for each data type to an Information Technology team; and modifying the IT environment for the at least one client device based on at least one of the plurality of experience levels and the cumulative experience scores for each data type.”

Examiner views these limitations as amounting to generally linking the use of the judicial exception to a particular technological environment or field of use (see MPEP 2106.05(h)). As such, Examiner does NOT view that the claims:

- Improve the functioning of a computer, or any other technology or technical field;
- Apply the judicial exception with, or by use of, a particular machine (see MPEP 2106.05(b));
- Effect a transformation or reduction of a particular article to a different state or thing (see MPEP 2106.05(c));
- Apply or use the judicial exception in some other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, such that the claim as a whole is more than a drafting effort designed to monopolize the exception (see MPEP 2106.05(e) and the Vanda Memo).

Moreover, Examiner views the claims as merely generally linking the use of the judicial exception to a computer system and generic computer data. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception.
As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “receiving, at a computer system from at least one client device via a network, technology performance data of the at least one client device over a plurality of periods of time, the at least one client device being associated with a user and an IT environment, the technology performance data comprising: endpoint data identifying operational aspects of the at least one client device; application data identifying operational aspects of at least one application executed by the at least one client device; and collaboration data identifying at least one of a collaboration software program executed by the at least one client device, wherein the endpoint data, the application data, and the collaboration data comprise time metadata and IT environment metadata identifying when and in which context an event took place; at least one processor; wherein the receiving of the technology performance data occurs via continuous reporting across the network as the technology performance data is generated by the at least one client device; and transmitting the plurality of experience levels and the cumulative experience scores for each data type to an Information Technology team and modifying the IT environment for the at least one client device based on at least one of the plurality of experience levels and the cumulative experience scores for each data type” amount to a generic computer system receiving data regarding generic applications and user computer systems and then transmitting information to a team of people, which is a well-known process. Furthermore, receiving data continuously can be classified as pre-solution activity, as it merely limits how the data is received by the generic computing device and represents mere data gathering.

Furthermore, modifying an IT environment can be seen as insignificant activity, as it could amount to outputting something on a computer screen given that it is not further limited in the claim or in the specification; outputting is using a computer as a tool. Examiner further notes that such additional elements are viewed to be well-known, routine, and conventional, as evidenced by D. Sharma (US 20200274784 A1) and A. Sharma (US 20220230118 A1). The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Considering the claim as a whole, one of ordinary skill in the art would not know the practical application of the present invention since the claims do not apply or use the judicial exception in some meaningful way. As currently claimed, Examiner views that the additional elements do not apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, because the claim fails to recite clearly how the judicial exception is applied in a manner that does not monopolize the exception: the limitations “receiving, at a computer system from at least one client device via a network, technology performance data of the at least one client device over a plurality of periods of time, the at least one client device being associated with a user and an IT environment, the technology performance data comprising: endpoint data identifying operational aspects of the at least one client device; application data identifying operational aspects of at least one application executed by the at least one client device; and collaboration data identifying at least one of a collaboration software program executed by the at least one client device, wherein the endpoint data, the application data, and the collaboration data comprise time metadata and IT environment metadata identifying when and in which context an event took place; at least one processor; wherein the receiving of the technology performance data occurs via continuous reporting across the network as the technology performance data is generated by the at least one client device; and transmitting the plurality of experience levels and the cumulative experience score to an Information Technology team” merely tie the claim to a well-known computer system and do not impose a meaningful limitation describing what problem is being remedied or solved.

Dependent claims 2-10, 12-19, and 21-22, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitation(s) fail(s) to establish that the claims are not directed to an abstract idea: the dependent claims further limit what data is being received or used to calculate the user experience, which is a mental process. Therefore, dependent claims 2-10, 12-19, and 21-22 further limit the abstract idea with an abstract idea, and thus the claims are still directed to an abstract idea without significantly more.

Claim Rejections - 35 USC § 103

Claims 1, 2, 4, 5, 7-12, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over D. Sharma (US 20200274784 A1) as modified by A. Sharma (US 20220230118 A1) and Malleshaiah (US 20220278889 A1).

Regarding claims 1, 11, and 20, D.
Sharma teaches:

receiving, at a computer system from at least one client device via a network, technology performance data of the at least one client device over a plurality of periods of time, the at least one client device being associated with a user and an IT (information technology) environment, the technology performance data comprising: (Abstract teaches “Systems and methods for analyzing digital user experience include performing inline monitoring of network access between one or more users each with an associated user device executing an agent application, the Internet, and one or more cloud applications and private applications accessible via lightweight connectors; based on user experience metrics collected by the inline monitoring and stored in a logging analysis system, obtaining user experience metrics for one or more users for a given time epoch and for a given application; and providing a graphical user interface displaying data related to various user experience scores for various users over various time epochs” (i.e., a time epoch is viewed as a period of time, and a user device is a client device.))

endpoint data identifying operational aspects of the at least one client device; (Para. [0005] teaches “obtaining device and application metrics for the user from the associated user device related to usage of specific application;”)

application data identifying operational aspects of at least one application executed by the at least one client device; (Para. [0005] teaches “obtaining device and application metrics for the user from the associated user device related to usage of specific application;”)

and collaboration data identifying at least one of a collaboration software program executed by the at least one client device, (Para. [0032] teaches “each of the processing nodes 110 may include a decision system, e.g., data inspection engines that operate on a content item, e.g., a web page, a file, an email message, or some other data or data communication that is sent from or requested by one of the external systems.” (i.e., data communication is seen as collaboration data.))

wherein the endpoint data, the application data, and the collaboration data comprise time metadata and IT environment metadata identifying when and in which context an event took place; (Para. [0106] teaches “The metrics can be tagged with metadata (user, time, app, etc.) and sent to the logging and analytics 804 service for aggregation, analysis and reporting.” (i.e., metadata is seen as environment metadata.))

comparing the endpoint data, the application data, and the collaboration data against a predefined hierarchy of data types, (Para. [0111] teaches “Scores can be aggregated for a group of users (e.g. department, location) or for the whole organization. Administrators are provided UEX score reports over time based on user, department, locations, etc. via a Graphical User Interface (GUI).” (i.e., user, department, locations, etc. are seen as data types in the hierarchy of data types.))

computing, via the at least one processor, a cumulative experience score for each data type in the predefined hierarchy of data types over a selected timeframe by combining experience levels associated with the selected timeframe and within the plurality of experience levels, resulting in cumulative experience scores for each data type, (Para. [0111] teaches “The UEX score captures the digital experience and can be based on a given application with associated device, application, and network-related metrics. For example, the UEX score can be determined based on some weighted combination of the device, application, and network-related metrics for a given application and the UEX score can be normalized within a range, e.g., 0 to 100. Again, the given application can be a core business critical application where UEX is important (e.g., Office365, Salesforce, Internal Inventory app, etc.) or any other designated application. The UEX scores can be determined at fixed time epochs (e.g., 15 minute increments, hour increments, etc.) and normalized. Scores can be aggregated for a group of users (e.g. department, location) or for the whole organization. Administrators are provided UEX score reports over time based on user, department, locations, etc. via a Graphical User Interface (GUI).” (i.e., fixed time epochs are seen as a selected timeframe; furthermore, user, department, locations, etc. are seen as data types in the hierarchy of data types.))

wherein the receiving of the technology performance data occurs via continuous reporting across the network as the technology performance data is generated by the at least one client device; (Para. [0028] teaches “Also, these components perform inline processing, enabling a real-time collection of data for the digital experience monitoring platform. Advantageously, by leveraging existing infrastructure, the digital experience monitoring platform provides real-time data which can be used for remediation and requires no additional equipment.” Para. [0088] teaches “The cloud system 800 brings aspects of FIGS. 1-8 into a single architecture that is leveraged by the systems and methods to provide real-time, continuous digital experience monitoring, as opposed to conventional approaches.”)

and transmitting the plurality of experience levels and the cumulative experience score to an Information Technology team; (Para. [0111] teaches “Administrators are provided UEX score reports over time based on user, department, locations, etc. via a Graphical User Interface (GUI).”)

and modifying the IT environment for the at least one client device based on at least one of the plurality of experience levels and the cumulative experience scores for each data type. (Para. [0113] teaches “displaying an alert responsive to any user, group of users, location, and organization's user experience score falling below a threshold for a particular time epoch. The process 850 can further include aggregating the user experience for users into groups of users, locations, and organizations, and providing a graphical user interface displaying data related to the groups of users, the locations, and the organizations.”)

D. Sharma does not explicitly teach: calculating, via at least one processor of the computer system for each period of time within the plurality of periods of time, a period of time specific level of experience based on the weighted endpoint data, the weighted application experience level, the weighted collaboration experience level, and the time and IT environment metadata, resulting in a plurality of experience levels, the plurality of experience levels respectively corresponding to the periods of time within the plurality of periods of time; wherein the period of time-specific level of experience of the user is determined using a metric-specific threshold customized to the user.

Nevertheless, A. Sharma teaches:

wherein each data type with the hierarchy of datatypes has a positive interaction weight, a neutral interaction weight, and a negative interaction weight, (Fig. 6 shows good, poor, and neutral weights.)

resulting in, for each of the plurality of periods of time: (1) a weighted endpoint experience level, (2) a weighted application experience level, and (3) a weighted collaboration experience level; (Fig. 6 shows desktop apps, mobile apps, and device health windows, where desktop apps are seen as application experience, mobile apps are seen as collaborative experience, and device health windows are seen as endpoint experience. Fig. 7 shows them weighted over time.)
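The scoring scheme the cited references describe (a weighted combination of per-epoch metric scores normalized to 0-100, with the overall good/neutral/poor category taken from the lowest component score, per A. Sharma Para. [0048]) can be sketched as follows. All function names, weights, and thresholds here are hypothetical illustrations, not taken from any of the cited references:

```python
# Hypothetical sketch of the cited scoring scheme: per-epoch component scores
# (0-100) are combined with weights, and the overall category comes from the
# LOWEST component score rather than the weighted mean. Weights and threshold
# values are invented for illustration.

def weighted_level(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted combination of 0-100 metric scores for one time epoch."""
    total = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in metrics) / total

def category(score: float) -> str:
    """Map a 0-100 score to the good/neutral/poor buckets shown in Fig. 7."""
    if score >= 70:
        return "good"
    if score >= 40:
        return "neutral"
    return "poor"

# One epoch's component experience levels (endpoint, application, collaboration).
epoch = {"endpoint": 82.0, "application": 55.0, "collaboration": 35.0}
weights = {"endpoint": 1.0, "application": 4.0, "collaboration": 2.0}

level = weighted_level(epoch, weights)
# Overall category from the lowest component score, not the weighted mean:
overall = category(min(epoch.values()))
print(round(level, 1), overall)  # prints: 53.1 poor
```

Taking the category from the minimum mirrors A. Sharma's point that a low user experience on any one metric can drive an overall negative experience, even when the weighted average is middling.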
calculating, via at least one processor of the computer system for each period of time within the plurality of periods of time, a period of time specific level of experience based on the weighted endpoint data, the weighted application experience level, the weighted collaboration experience level, and the time and IT environment metadata, resulting in a plurality of experience levels, the plurality of experience levels respectively corresponding to the periods of time within the plurality of periods of time; (Para. [0005] teaches “The second application data can include user experience information for the desktop version of the application. The management server can determine a desktop user experience score based on the second application data. The desktop user experience score can be specific to the desktop user device, specific to a user associated with the desktop user device, or an aggregate score based on application data from multiple desktop user devices.” Para. [0009] teaches “The management server can determine a device health user experience score based on the device health information. The device health user experience score can be specific to a user device, specific to all user devices associated with a user, or an aggregate score based on device health information from multiple user devices.” Para. [0088] teaches “FIG. 7 illustrates an example score graph 700 of user experience scores over time. The y-axis 710 represents user experience scores as “poor,” “neutral,” or “good,” and the x-axis 720 represents the time of day. For example, the score graph 700 in FIG. 7 shows a user experience score over the past 24 hours.” (i.e., poor, neutral, and good are seen as the weight.))

wherein the period of time-specific level of experience of the user is determined using a metric-specific threshold customized to the user; (Para. [0009] teaches “For example, the data for each metric can be compared to thresholds for different scores,” Para. [0055] teaches “In one example, the management server 150 can determine a mobile user experience score for each user of the mobile application” (i.e., customized to a user.))

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify D. Sharma with calculating, via at least one processor of the computer system for each period of time within the plurality of periods of time, a period of time specific level of experience based on the weighted endpoint data, the weighted application experience level, the weighted collaboration experience level, and the time and IT environment metadata, resulting in a plurality of experience levels, the plurality of experience levels respectively corresponding to the periods of time within the plurality of periods of time, wherein the period of time-specific level of experience of the user is determined using a metric-specific threshold customized to the user, such as that of A. Sharma. One of ordinary skill would have been motivated to modify D. Sharma because, according to Para. [0002] of A. Sharma, “Current methods for determining the quality of a user experience with software applications exclude important factors that affect a user's experience. For example, some user experience metrics consider performance data like errors, hang times, and crashes, but ignore how the health of the user's device affects the user's experience and vice versa. Some metrics fail to consider a desktop and a mobile version of an application separately. All of these methods fail to consider that a low user experience on any metric can cause an overall negative user experience.” The method of A. Sharma represents a comprehensive user experience score, and therefore it would be obvious to use it to modify D. Sharma to more accurately capture user experience. The combination of D.
Sharma and A. Sharma does not explicitly teach: data types within the predefined hierarchy of data types being associated with different job descriptions of individuals. Malleshaiah teaches data types within the predefined hierarchy of data types being associated with different job descriptions of individuals. (Para. [0224] teaches “That is, there can be various baselines, e.g., a specific user baseline, a location baseline, a baseline based on organization, based on job function, etc. Each baseline is for a specific metric and shows what that metric should be for a good UX score. There can be multiple types of bases—global baselines (across all users 102), per user per device baselines, etc.”)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma and A. Sharma with data types within the predefined hierarchy of data types being associated with different job descriptions of individuals, such as that of Malleshaiah. One of ordinary skill would have been motivated to modify the combination of D. Sharma and A. Sharma because D. Sharma calculates a UX score using different but similar data, and each metric can determine different issues that the user has with the system, as seen in the above-cited Para. [0224]. Furthermore, Para. [0038] of Malleshaiah teaches “Advantageously, by leveraging existing infrastructure, the digital experience monitoring platform provides real-time data which can be used for remediation and requires no additional equipment. For example, the digital experience monitoring platform can enable an intelligent path selection in real-time for a user. Thus, the digital experience monitoring platform is proactive, not reactive.”

Regarding claim 2, D. Sharma further teaches the method of claim 1, wherein the plurality of periods of time comprise at least one of consecutive minutes, consecutive hours, and consecutive days. (Para. [0111] further teaches “The UEX scores can be determined at fixed time epochs (e.g., 15 minute increments, hour increments, etc.)”)

Regarding claim 4, D. Sharma further teaches the method of claim 1, wherein the plurality of experience levels and the cumulative experience score are computed for a single individual user. (Para. [0112] teaches “determining a user experience score for the one or more users for the given time epoch and for the given application based on the obtained user experience metrics,”)

Regarding claim 5, D. Sharma further teaches the method of claim 1, wherein the plurality of experience levels and the cumulative experience score are computed for a plurality of users. (Para. [0112] teaches “based on user experience metrics collected by the inline monitoring and stored in a logging analysis system, obtaining user experience metrics for one or more users for a given time epoch and for a given application (step 852)”)

Regarding claim 7, D. Sharma further teaches the method of claim 1, wherein the calculating of the level of experience further comprises: comparing each metric within the endpoint data, the application data, and the collaboration data to at least one respective predetermined metric-specific threshold, resulting in metric comparisons; (Para. [0115] teaches “The points can be allocated based on where the user falls within a percentile threshold (e.g., p80), p100 being the worst UEX. Metrics can be weighted, e.g., Latency=4 pts., % CPU=1 pts. For an application and location, calculate average score based on users that are using the application at the location. The overall score is computed based on average UEX score across all users. For example, in the score card below on scale of 0 (best)-10(worst), John's score is 2.5 (or 75/100)” Para.
[0113] teaches “The process 850 can further include generating and displaying an alert responsive to any user, group of users, location, and organization's user experience score falling below a threshold for a particular time epoch.”) identifying, based on the metric comparisons, metric-specific experience levels; (Para. [0134] teaches “The UEX score metrics can be aggregated by geographic location and application to highlight problems based on default or pre-configured thresholds and metric trends (e.g., 90th percentile, mean) to provide an ability to share or save an interactive snapshot of the problem as part of a service escalation.”) D. Sharma does not explicitly teach, and assigning the level of experience for each period of time based on a lowest ranked level of experience within the metric-specific experience levels. Nevertheless A. Sharma teaches, and assigning the level of experience for each period of time based on a lowest ranked level of experience within the metric-specific experience levels. (Para. [0048] teaches “In an example, the overall user experience score can be categorical, such as one of “Poor,” “Neutral,” and “Good.” For example, the management server 150 can compare the lowest of the three user experience scores to categorical thresholds, and the overall user experience score can be the category that the lowest score falls into”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma, A. Sharma, and Malleshaiah with assigning the level of experience for each period of time based on a lowest ranked level of experience within the metric-specific experience levels such as that of A. Sharma. One of ordinary skill would have been motivated to modify the combination of D. Sharma, A. 
Sharma, and Malleshaiah, because the lowest score would ensure a more user biased rating ensuring better customer service in that any negative experience would be more readily addressed. Regarding claim 8, D. Sharma does not explicitly teach, the method of claim 5, wherein the at least one respective predetermined metric-specific threshold is customized to a user. Nevertheless A. Sharma teaches, wherein the at least one respective predetermined metric-specific threshold is customized to a user. (Para. [0009] teaches “For example, the data for each metric can be compared to thresholds for different scores,” Para. [0055] “In one example, the management server 150 can determine a mobile user experience score for each user of the mobile application”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify D. Sharma wherein the at least one respective predetermined metric-specific threshold is customized to a user such as that of A. Sharma. One of ordinary skill would have been motivated to modify the combination of D. Sharma, A. Sharma, and Malleshaiah, because each user would use unique applications on unique systems. Therefore, to improve accuracy of the users experience the metrics used should be customized for each user. Regarding claim 9, D. Sharma further teaches, the method of claim 5, wherein the at least one respective predetermined metric-specific threshold is dynamically adjusted for a group of users based on historical data. (Para. [0096] teaches “The cloud system 800 can enable real-time performance and behaviors for troubleshooting in the current state of the environment, historical performance and behaviors to understand what occurred or what is trending over time, predictive behaviors by leveraging analytics technologies to distill and create actionable items from the large dataset collected across the various data sources, and the like.” Para. 
[0111] teaches “Scores can be aggregated for a group of users (e.g. department, location) or for the whole organization.”) Regarding claim 10, D. Sharma does not explicitly teach, the method of claim 1, wherein the plurality of experience levels are selected from categories comprising positive, negative, and neutral. Nevertheless A. Sharma further teaches, wherein the plurality of experience levels are selected from categories comprising positive, negative, and neutral. (Para. [0009] teaches “For example, the data for each metric can be compared to thresholds for different scores, such as “poor,” “neutral, and “good.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma, A. Sharma, and Malleshaiah wherein the plurality of experience levels are selected from categories comprising positive, negative, and neutral such as that of A. Sharma. One of ordinary skill would have been motivated to modify the combination of D. Sharma, A. Sharma, and Malleshaiah, because utilizing the same threshold for different classifications would not be useful in classifying the user experience score. Therefore, one would be motivated to classify the data into different groups to increase organization and clarity of the user experience. Regarding claim 12, D. Sharma further teaches, the system of claim 11, wherein the plurality of periods of time comprise at least one of consecutive minutes, consecutive hours, and consecutive days. (Para. [0111] further teaches “The UEX scores can be determined at fixed time epochs (e.g., 15 minute increments, hour increments, etc.)”) Regarding claim 14, D. Sharma further teaches, the system of claim 11, wherein the plurality of experience levels and the cumulative experience score are computed for a plurality of users. (Para. 
[0112] teaches “based on user experience metrics collected by the inline monitoring and stored in a logging analysis system, obtaining user experience metrics for one or more users for a given time epoch and for a given application (step 852)”) Regarding claim 16, D. Sharma further teaches, the system of claim 11, wherein the calculating of the level of experience further comprises: comparing each metric within the endpoint data, the application data, and the collaboration data to at least one respective predetermined metric-specific threshold, resulting in metric comparisons; (Para. [0115 teaches “The points can be allocated based on where the user falls within a percentile threshold (e.g., p80), p100 being the worst UEX. Metrics can be weighted, e.g., Latency=4 pts., % CPU=1 pts. For an application and location, calculate average score based on users that are using the application at the location. The overall score is computed based on average UEX score across all users. For example, in the score card below on scale of 0 (best)-10(worst), John's score is 2.5 (or 75/100)” Para. [0113] teaches “The process 850 can further include generating and displaying an alert responsive to any user, group of users, location, and organization's user experience score falling below a threshold for a particular time epoch.”) identifying, based on the metric comparisons, metric-specific experience levels; (Para. [0134] teaches “The UEX score metrics can be aggregated by geographic location and application to highlight problems based on default or pre-configured thresholds and metric trends (e.g., 90th percentile, mean) to provide an ability to share or save an interactive snapshot of the problem as part of a service escalation.”) D. Sharma does not explicitly teach, and assigning the level of experience for each period of time based on a lowest ranked level of experience within the metric-specific experience levels. Nevertheless A. 
Sharma teaches, and assigning the level of experience for each period of time based on a lowest ranked level of experience within the metric-specific experience levels. (Para. [0048] teaches “In an example, the overall user experience score can be categorical, such as one of “Poor,” “Neutral,” and “Good.” For example, the management server 150 can compare the lowest of the three user experience scores to categorical thresholds, and the overall user experience score can be the category that the lowest score falls into”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma, A. Sharma, and Malleshaiah with assigning the level of experience for each period of time based on a lowest ranked level of experience within the metric-specific experience levels such as that of A. Sharma. One of ordinary skill would have been motivated to modify the combination of D. Sharma, A. Sharma, and Malleshaiah, because the lowest score would ensure a more user biased rating ensuring better customer service in that any negative experience would be more readily addressed. Regarding claim 17, D. Sharma does not explicitly teach, the system of claim 16, wherein the at least one respective predetermined metric-specific threshold is customized to a user. Nevertheless A. Sharma teaches, wherein the at least one respective predetermined metric-specific threshold is customized to a user. (Para. [0009] teaches “For example, the data for each metric can be compared to thresholds for different scores,” Para. [0055] “In one example, the management server 150 can determine a mobile user experience score for each user of the mobile application”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma, A. 
Sharma, and Malleshaiah wherein the at least one respective predetermined metric-specific threshold is customized to a user. One of ordinary skill would have been motivated to modify the combination of D. Sharma, A. Sharma, and Malleshaiah, because each user would use unique applications on unique systems. Therefore, to improve accuracy of the users experience the metrics used should be customized for each user. Regarding claim 18, D. Sharma further teaches, the system of claim 16, wherein the at least one respective predetermined metric-specific threshold is dynamically adjusted for a group of users based on historical data. (Para. [0096] teaches “The cloud system 800 can enable real-time performance and behaviors for troubleshooting in the current state of the environment, historical performance and behaviors to understand what occurred or what is trending over time, predictive behaviors by leveraging analytics technologies to distill and create actionable items from the large dataset collected across the various data sources, and the like.” Para. [0111] teaches “Scores can be aggregated for a group of users (e.g. department, location) or for the whole organization.”) Regarding claim 19, D. Sharma does not explicitly teach, the system of claim 11, wherein the plurality of experience levels are selected from categories comprising positive, negative, and neutral. Nevertheless A. Sharma further teaches, wherein the plurality of experience levels are selected from categories comprising positive, negative, and neutral. (Para. [0009] teaches “For example, the data for each metric can be compared to thresholds for different scores, such as “poor,” “neutral, and “good.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma, A. 
Sharma, and Malleshaiah wherein the plurality of experience levels are selected from categories comprising positive, negative, and neutral such as that of A. Sharma. One of ordinary skill would have been motivated to modify the combination of D. Sharma, A. Sharma, and Malleshaiah, because utilizing the same threshold for different classifications would not be useful in classifying the user experience score. Therefore, one would be motivated to classify the data into different groups to increase organization and clarity of the user experience. Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over D. Sharma (US 20200274784 A1) as modified by A. Sharma (US 20220230118 A1) and Malleshaiah (US 20220278889 A1) as applied to claims 1 and 11 above, and further in view of Singh (US 20230205595 A1). Regarding claim 3, The combination of D. Sharma, A. Sharma, and Malleshaiah does not explicitly teach, the method of claim 1, wherein the plurality of periods of time comprise non-consecutive periods of time. Singh teaches, wherein the plurality of periods of time comprise non-consecutive periods of time. (Para. [0094] teaches “The multiple time intervals may be consecutive or non-consecutive. For example, the session analyzer 332 can flag sessions 320 with UX scores failing the threshold for at least 5 consecutive time intervals. In another example, the session analyzer 332 can flag sessions 320 with UX scores failing the threshold for 4 out of 10 time intervals.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma, A. Sharma, and Malleshaiah wherein the plurality of periods of time comprise non-consecutive periods of time such as that of Singh. One of ordinary skill would have been motivated to modify the combination of the combination of D. Sharma, A. 
Sharma, and Malleshaiah, because if the UX score fails to hit the threshold non-consecutively it could still mean that the user had a poor experience and therefore the method should be updated to more accurately reflect the experience of the user. Regarding claim 13, the combination of D. Sharma, A. Sharma, and Malleshaiah does not explicitly teach, the system of claim 11, wherein the plurality of periods of time comprise non-consecutive periods of time. Nevertheless, Singh teaches, wherein the plurality of periods of time comprise non-consecutive periods of time. (Para. [0094] teaches “The multiple time intervals may be consecutive or non-consecutive. For example, the session analyzer 332 can flag sessions 320 with UX scores failing the threshold for at least 5 consecutive time intervals. In another example, the session analyzer 332 can flag sessions 320 with UX scores failing the threshold for 4 out of 10 time intervals.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma, A. Sharma, and Malleshaiah wherein the plurality of periods of time comprise non-consecutive periods of time such as that of Singh. One of ordinary skill would have been motivated to modify the combination of D. Sharma, A. Sharma, and Malleshaiah, because if the UX score fails to hit the threshold non-consecutively it could still mean that the user had a poor experience and therefore the method should be updated to more accurately reflect the experience of the user. Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over D. Sharma (US 20200274784 A1) as modified by A. Sharma (US 20220230118 A1) and Malleshaiah (US 20220278889 A1) as applied to claims 5 and 14 above, and further in view of Reynolds (US 20210373858 A1). Regarding claim 6, The combination of D. Sharma and A. 
Sharma does not explicitly teach, the method of claim 5, wherein the plurality of users use distinct computer operating systems. Nevertheless, Reynolds teaches, wherein the plurality of users use distinct computer operating systems. (Para. [0042] teaches “Edge logic 208 can cleanse data by normalizing data labels and data sets. Edge logic 208 can receive captured data labels with different names across operating system platforms and can have difference decimal places. Edge logic 208 normalizes these values to enable computer-based evaluations easier.”) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of D. Sharma, A. Sharma, and Malleshaiah, wherein the plurality of users use distinct computer operating systems such as that of Reynolds. One of ordinary skill would have been motivated to modify the combination of D. Sharma, A. Sharma, and Malleshaiah, because across an organizations network users may be using different operating systems and therefore to aggregate user experience a method such as that of Reynolds would be needed to normalize values and determine overall user experience. Regarding claim 15, the combination of D. Sharma, A. Sharma, and Malleshaiah does not explicitly teach, the system of claim 14, wherein the plurality of users use distinct computer operating systems. Nevertheless, Reynolds teaches, wherein the plurality of users use distinct computer operating systems. (Para. [0042] teaches “Edge logic 208 can cleanse data by normalizing data labels and data sets. Edge logic 208 can receive captured data labels with different names across operating system platforms and can have difference decimal places. Edge logic 208 normalizes these values to enable computer-based evaluations easier.”) It would have been obvious to one of ordinary skill in the art before the
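The weighted percentile scheme quoted from D. Sharma Para. [0115] can be sketched concretely. This is a hypothetical illustration, not code from any cited reference: the metric names, the Latency=4/CPU=1 weights, and the p80 threshold come from the quoted example, while the function names and the 0-10 normalization are assumptions.

```python
# Hypothetical sketch of the weighted UEX scoring described in D. Sharma
# Para. [0115]: a metric accrues its weighted penalty points when the user's
# percentile exceeds a threshold (p100 = worst), and per-user scores are
# averaged for an application at a location.

WEIGHTS = {"latency": 4, "cpu": 1}   # penalty points per metric (illustrative)
PERCENTILE_THRESHOLD = 80            # p80; beyond this, points accrue

def user_uex_score(metric_percentiles):
    """Return a 0 (best) to 10 (worst) score from per-metric percentiles."""
    max_points = sum(WEIGHTS.values())
    points = sum(
        weight
        for metric, weight in WEIGHTS.items()
        if metric_percentiles.get(metric, 0) > PERCENTILE_THRESHOLD
    )
    return 10 * points / max_points

def location_score(users):
    """Average the UEX score across all users of an app at one location."""
    return sum(user_uex_score(u) for u in users) / len(users)
```

Under these assumptions, a user at the 95th percentile for latency but the 50th for CPU scores 8.0, since only the latency weight (4 of 5 possible points) accrues.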
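The "lowest score drives the category" rule quoted from A. Sharma Para. [0048] reduces to a comparison of the minimum metric score against ordered categorical thresholds. The sketch below is a hypothetical reading of that paragraph; the 0-100 scale, the threshold values, and the function name are all assumptions.

```python
# Hypothetical sketch of A. Sharma Para. [0048]: the overall experience
# level for a period is the category that the LOWEST metric-specific score
# falls into. Score floors are illustrative, on an assumed 0-100 scale.

THRESHOLDS = [(70, "Good"), (40, "Neutral"), (0, "Poor")]  # floor -> label

def overall_level(metric_scores):
    """Classify a period by the lowest of its metric-specific scores."""
    lowest = min(metric_scores.values())
    for floor, label in THRESHOLDS:
        if lowest >= floor:
            return label
    return "Poor"  # defensive fallback; the 0 floor already catches this
```

With latency=85 and CPU=92 but collaboration=45, the single weak metric drags the whole period to "Neutral", which is the user-biased behavior the motivation statement relies on.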
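Singh's Para. [0094] describes two flagging modes: a consecutive run of failing intervals, and a non-consecutive count within a window ("4 out of 10"). A minimal sketch under those two quoted examples, with all names and defaults chosen for illustration:

```python
# Hypothetical sketch of the session flagging in Singh Para. [0094]: flag a
# session whose UX score fails a threshold for at least `run_len` consecutive
# intervals, or for `window_fails` out of any `window` intervals (the
# non-consecutive case). Defaults mirror the quoted examples.

def flag_session(scores, threshold, run_len=5, window=10, window_fails=4):
    fails = [score < threshold for score in scores]
    # Consecutive-run test: e.g. 5 failing intervals in a row.
    run = 0
    for failed in fails:
        run = run + 1 if failed else 0
        if run >= run_len:
            return True
    # Sliding-window test: e.g. 4 failures in any 10-interval window.
    for i in range(max(1, len(fails) - window + 1)):
        if sum(fails[i:i + window]) >= window_fails:
            return True
    return False
```

Alternating good and bad intervals never build a run of five, yet four failures inside ten intervals still flag the session, which is the non-consecutive behavior claims 3 and 13 recite.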

Prosecution Timeline

Jul 06, 2022
Application Filed
Sep 05, 2024
Non-Final Rejection — §101, §103, §112
Jan 17, 2025
Response Filed
Apr 04, 2025
Final Rejection — §101, §103, §112
Aug 06, 2025
Applicant Interview (Telephonic)
Aug 06, 2025
Examiner Interview Summary
Aug 08, 2025
Request for Continued Examination
Aug 12, 2025
Response after Non-Final Action
Nov 29, 2025
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572161
METHOD AND CONTROL DEVICE FOR CONTROLLING A ROTATIONAL SPEED
2y 5m to grant Granted Mar 10, 2026
Patent 12546581
CAPACITIVE DETECTION OF FOLD ANGLE FOR FOLDABLE DEVICES
2y 5m to grant Granted Feb 10, 2026
Patent 12516599
MONITORING CORROSION IN DOWNHOLE EQUIPMENT
2y 5m to grant Granted Jan 06, 2026
Patent 12481043
SYSTEMS AND TECHNIQUES FOR DEICING SENSORS
2y 5m to grant Granted Nov 25, 2025
Patent 12455392
METHOD TO CORRECT VSP DATA
2y 5m to grant Granted Oct 28, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
92%
With Interview (+23.4%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 58 resolved cases by this examiner. Grant probability derived from career allow rate.
