Prosecution Insights
Last updated: April 19, 2026
Application No. 18/381,476

METHODS FOR GENERATING INSIGHT DATA AND RECOMMENDATIONS FROM ANALYTICAL DATA AND DEVICES THEREOF

Non-Final OA — §101, §102, §103
Filed: Oct 18, 2023
Examiner: DIVELBISS, MATTHEW H
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Jones Lang LaSalle IP, Inc.
OA Round: 3 (Non-Final)
Grant Probability: 23% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 1m
Grant Probability with Interview: 46%

Examiner Intelligence

Career Allow Rate: 23% (83 granted / 367 resolved; -29.4% vs TC avg) — grants only 23% of cases
Interview Lift: +23.4% (allow rate with vs. without interview, among resolved cases with interview) — a strong lift
Typical Timeline: 4y 1m average prosecution; 50 currently pending
Career History: 417 total applications across all art units
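
How these card figures fit together can be reproduced with a little arithmetic. The sketch below is illustrative only: the counts and deltas come from the card, and it assumes (consistent with the rounding) that the 46% with-interview figure is the career allow rate plus the additive interview lift; the variable names are our own.

```python
# Illustrative reconstruction of the examiner-intelligence figures above.
# Only the counts and deltas come from the report; the names are ours.

granted, resolved = 83, 367                    # career totals from the card
allow_rate = granted / resolved                # 0.226 -> shown as 23%

interview_lift = 0.234                         # +23.4 points (cases with interview)
with_interview = allow_rate + interview_lift   # 0.460 -> shown as 46%

tc_delta = -0.294                              # examiner minus Tech Center average
tc_avg_estimate = allow_rate - tc_delta        # implied TC average

print(f"career allow rate: {allow_rate:.1%}")      # 22.6%
print(f"with interview:    {with_interview:.1%}")  # 46.0%
print(f"implied TC avg:    {tc_avg_estimate:.1%}") # 52.0%
```

Note the implied Tech Center average (~52%) is not shown on the card itself; it follows only if the -29.4% delta is an additive difference in percentage points.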

Statute-Specific Performance

§101: 37.0% (-3.0% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§103: 43.5% (+3.5% vs TC avg)
§112: 6.9% (-33.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 367 resolved cases
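
The per-statute deltas let a reader back out the chart's black reference line. A minimal sketch, assuming "vs TC avg" is an additive difference in percentage points against each statute's rate (the dict layout and names are illustrative):

```python
# Back out the Tech Center average (the chart's black line) from each
# statute's rate and its "vs TC avg" delta. The values come from the
# report; the assumption tc_avg = rate - delta is ours.

rates = {  # statute: (examiner rate, delta vs TC avg)
    "§101": (0.370, -0.030),
    "§102": (0.102, -0.298),
    "§103": (0.435, +0.035),
    "§112": (0.069, -0.331),
}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"{statute}: examiner {rate:.1%}, implied TC avg {tc_avg:.1%}")
```

Under that assumption every statute implies the same ~40.0% Tech Center average, consistent with a single black reference line on the chart.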

Office Action

Rejections: §101, §102, §103
DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/27/26 has been entered, in which Applicant amended claims 1, 7, 8, 13, 14, and 20. Claims 1-20 are pending in this application and have been rejected below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

No Information Disclosure Statement (IDS) has been submitted on behalf of this case. Accordingly, the examiner has not considered an IDS.

Response to Amendment

Applicant’s amendments are acknowledged. The 35 USC 101 rejections of claims 1-20 regarding abstract ideas are maintained in light of Applicant’s amendments and explanations. Revised 35 USC 102 and 103 rejections of claims 1-20 are applied in light of Applicant’s amendments and explanations.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Here, under the broadest reasonable interpretation of the claimed invention, Examiner finds that the Applicant invented a method and system for receiving and analyzing/filtering input data to determine a better dashboard for presenting data. Examiner formulates an abstract idea analysis, following the framework described in the MPEP, as follows:

Step 1: The claims are directed to a statutory category, namely a "method" (claims 1-7) and "system" (claims 8-20).

Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1: A method comprising: identifying… one or more of a plurality of stored dashboards based on a selection, by a user, from available metrics and contextual data in a received data processing request … wherein the one or more of the plurality of stored dashboards are identified using a stored mapping between the plurality of stored dashboards and the available metrics; retrieving… dashboard configuration data associated with the identified ones of the plurality of stored dashboards based on the selection from the available metrics and the contextual data; generating… manipulated data, from the dashboard configuration data based on the contextual data; filtering… the manipulated data based on one or more selected filters received… wherein the filtered manipulated data is analyzed to generate a set of insight data, wherein the generated set of insight data comprises recommended actions relating to the insight data; outputting… a new dashboard based on the generated set of insight data as a response to the data processing request. Independent claims 8 and 14 recite substantially similar claim language. Dependent claims 2-7, 9-13, and 15-20 recite the same or similar abstract idea(s) as independent claims 1, 8, and 14 with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea.

The limitations in claims 1-20 above fall well within the groupings of subject matter identified by the courts as being abstract concepts; specifically, the claims are found to correspond to the category of: "Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)" as the limitations identified above are directed to receiving and analyzing/filtering input data to determine a better dashboard for presenting data and thus constitute a method of organizing human activity including at least commercial or business interactions or relations and/or a management of user personal behavior; and/or "Mental processes - concepts performed in the human mind (including an observation, evaluation, judgment, opinion)" as the limitations identified above include mere data observations, evaluations, judgments, and/or opinions, e.g. including receiving and analyzing/filtering input data to determine a better dashboard for presenting data, which is capable of being performed mentally and/or using pen and paper.

Step 2A - Prong 2: Claims 1-20 are found to clearly be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application. Specifically, the claims recite the additional elements of:

"outputting, by the computing apparatus, a new dashboard based on the generated set of insight data as a response to the data processing request… modifying, by the computing apparatus, the interface to display the new dashboard with the filtered manipulated data and the generated set of insight data to consolidate data from the plurality of stored dashboards into a single interface" (claims 1, 8, and 14); however, the aforementioned elements directed to the receiving of user input/selection of data to view via a dashboard and displaying corresponding data via the dashboard merely amount to generic GUI elements of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and/or are merely an attempt at limiting the abstract idea of receiving and analyzing/filtering input data to determine a better dashboard for presenting data to a particular field of use/technological environment of a GUI dashboard (MPEP 2106.05(h)), and therefore the GUI dashboard input and display of data fails to integrate the abstract idea into a practical application;

"a computing apparatus / A non-transitory computer readable medium having stored thereon instructions comprising executable code, which when executed by at least one processor, cause the processor to: / An analytics apparatus, comprising memory comprising programmed instructions stored in the memory and processors configured to be capable of executing the programmed instructions stored in the memory to:" (claims 1, 8, and 14); however, the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and thus fail to integrate the recited abstract idea into a practical application. Furthermore, the high-level recitation of receiving data from a generic "computing apparatus" is at most an attempt to limit the abstract idea to a particular field of use (MPEP 2106.05(h), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPEP 2106.05(g)) and thus further fails to integrate the abstract idea into a practical application;

"wherein generating the manipulated data based on the predetermined contractual obligation further comprises: receiving contractual textual data from the client device" (claims 5 and 17); however, the receiving of data from these various sources is merely insignificant extra-solution activity, e.g. data gathering, and/or merely an attempt at limiting the abstract idea to a particular field of use and thus fails to integrate the recited abstract idea into a practical application (e.g. MPEP 2106.05(h): "Examiners should keep in mind that this consideration overlaps with other considerations, particularly insignificant extra-solution activity (see MPEP § 2106.05(g)). For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).").

Step 2B: Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to a particular field of use of data analysis using a "computing apparatus" to display a GUI "dashboard", as explained above, and/or performs insignificant extra-solution activity, e.g. data gathering or output (MPEP 2106.05(g)), as identified above, which is further found under Step 2B to be merely well-understood, routine, and conventional activities as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Therefore, similarly, the combination and arrangement of the above identified additional elements when analyzed under Step 2B also fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to determining impact of service cases on KPIs for various levels of building components and visualizing the associated metrics according to user desired/selected levels of granularity.

Claims 1-20 are accordingly rejected under 35 USC § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea(s)) without significantly more.

Note: The analysis above applies to all statutory categories of invention. As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis. For further authority and guidance, see MPEP § 2106 and https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102(a)(2) that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6-16, and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by U.S. Patent Application Publication Number 2021/0241893 to Dunwoody et al. (hereafter referred to as Dunwoody).

As per claim 1, Dunwoody teaches: A method comprising:… by a computing apparatus (Paragraph Number [0006] teaches a computer program product comprising a computer useable or readable medium having a computer readable program is provided. The computer readable program, when executed on a computing device, causes the computing device to perform various ones of, and combinations of, the operations outlined above with regard to the method illustrative embodiment. Paragraph Number [0025] teaches before beginning the discussion of the various aspects of the illustrative embodiments in more detail, it should first be appreciated that throughout this description the term “mechanism” will be used to refer to elements of the present invention that perform various operations, functions, and the like. A “mechanism,” as the term is used herein, may be an implementation of the functions or aspects of the illustrative embodiments in the form of an apparatus, a procedure, or a computer program product. In the case of a procedure, the procedure is implemented by one or more devices, apparatus, computers, data processing systems, or the like.
In the case of a computer program product, the logic represented by computer code or instructions embodied in or on the computer program product is executed by one or more hardware devices in order to implement the functionality or perform the operations associated with the specific “mechanism.” (See also Paragraph Number [0030])). identifying, by a computing apparatus, one or more of a plurality of stored dashboards based on a selection, by a user, from available metrics and contextual data in a received data processing request from the user at a client device (Paragraph Number [0021] teaches a recommendation may then be output to the user via a portion of a dashboard, a separate dialog box, or any other suitable output mechanism, that indicates what information the illustrative embodiments believe the user is looking for and the one or more dashboards that provide that information. This output may further provide links or other user interface elements by which the user may select the dashboard that they would like to access and thereby be redirected to the recommended dashboard. Paragraph Number [0047] teaches the analytics systems 110 generate analytics results data which again, may be stored in the backend data stores 150, or in other storage (not shown) for utilization in providing output representations of this results data, e.g., graphical, textual, or audible outputs presenting the analytics results data for use by a user. The user experience systems 120 provide the logic for generating these output representations which, in accordance with the illustrative embodiments described herein, include one or more dashboards. A “dashboard” as the term is used herein refers to a graphical and/or textual user interface that provides a plurality of different representations of the same or different analytics results data in a single output, as previously discussed above. The particular dashboards that the user experience systems 120 may generate are pre-defined by dashboard creators and stored in a dashboard repository (not shown) associated with the user experience systems 120. The pre-defined dashboards are populated with specific data obtained from the analytics results data generated by the analytics systems 110 and stored in the backend data stores 150 or other storage, which is accessible via the backend data store interface system 140, and which may also provide the pre-processors and other logic for maintaining data in the backend data stores 150 and providing access to such stored data. Paragraph Number [0048] teaches the dashboard usage tracking and recommendation system 130 provides the logic for tracking user interactions with the various dashboards that are provided by the user experience systems 120 as well as interactions the users make both before and after interacting with the dashboards so as to generate a usage pattern for the user. The usage pattern is a representation of the types of actions performed by the user with the dashboard, lengths of time interacting with elements of the dashboard, actions taken prior to or subsequent to interacting with the dashboard such as with other user experience systems 120 including, but not limited to, search engines, instant messaging systems, and the like). 
wherein the one or more of the plurality of stored dashboards are identified using a stored mapping between the plurality of stored dashboards and the available metrics (Paragraph Number [0075] teaches the DUTAR system 350 may also analyze these metrics and present recommendations to the administrator as to modifications to the dashboard of interest. For example, as noted above, highest occurrences of particular terms or phrases may be matched to terms/phrases in metadata descriptions of pre-defined dashboards in the storage 345 to determine which pre-defined dashboard data structures present information related to those terms/phrases. Similarly, based on the determination of portions of other dashboards, or dashboards themselves, accessed by users more often than other dashboards or portions thereof, recommendations as to modifications to the dashboard of interest to include a link to or portions of the other dashboard may be generated. All of this information may be presented to an administrator so that they can determine what modifications, if any, to perform to a dashboard so as to improve the dashboard offerings provided in the predefined dashboards storage 345. (See also Example is Paragraph Number [0079] teaching the mapping of specific topics or information (metrics) to specific dashboards that are tied to that particular subject matter)). retrieving, by the computing apparatus, dashboard configuration data associated with the identified ones of the plurality of stored dashboards based on the selection from the available metrics and the contextual data (Paragraph Number [0017] teaches tracking dashboard usage and generating recommendations as to dashboards or portions of dashboards that may be of interest based on analysis of such usage. The mechanisms of the illustrative embodiments track usage patterns of dashboards and predict the information that a user is attempting to obtain, correlates this information with dashboard configuration information specifying the information that various dashboards provides, and selects one or more dashboards that provide information corresponding to the predicted information for recommending to the user. Paragraph Number [0051] teaches the dashboard usage tracking and recommendation system 130 may then generate a recommendation output that is output to the user and includes an indication of the information that the system 130 predicts the user is looking for as well as the recommended dashboard(s) for providing the information. Moreover, the output may include hyperlinks, graphical user interface elements, or the like, for allowing the user to select or otherwise specify a desire to go to a recommended dashboard and have it provided to the user). generating, by the computing apparatus, manipulated data, from the dashboard configuration data based on the contextual data (Paragraph Number [0051] teaches the dashboard usage tracking and recommendation system 130 may then generate a recommendation output that is output to the user and includes an indication of the information that the system 130 predicts the user is looking for as well as the recommended dashboard(s) for providing the information. Moreover, the output may include hyperlinks, graphical user interface elements, or the like, for allowing the user to select or otherwise specify a desire to go to a recommended dashboard and have it provided to the user. 
Paragraph Number [0052] teaches the dashboard usage tracking and recommendation system 130 may further provide usage tracking and analysis across multiple users in the same and/or different organizations and provide recommendations to system administrators and/or other dashboard creators as to the usage patterns observed across multiple users with regard to the pre-defined dashboards. For example, analytics may be executed on user interaction data tracked for each of the pre-defined dashboards to extract information about the way in which the users utilized the dashboards and the actions that they take both before and after interacting with the dashboards indicating the usefulness of the dashboard to the users' needs). filtering, by the computing apparatus, the manipulated data based on one or more selected filters received from the client device (Paragraph Number [0070] teaches the analytics data generated by the analytics engine(s) 330 may be provided, either from the backend data stores 320 or directly from the analytics engines 330, to the user experience system 340 which comprises logic for generating one or more dashboards 360 based on predefined dashboard data structures in predefined dashboards storage 345. The predefined dashboard data structures may specify templates, code, or utilized other mechanisms for defining the dashboards including the portions of the dashboards, the analytics data used to generate the portions of the dashboards, and the like. The user experience system 340 may, based on a request from a user via a client system 370 or administrator console 380, generate a corresponding dashboard that matches the request by retrieving the appropriate predefined dashboard data structure from the storage 345 and populating the portions of the dashboard 360 with analytics data generated by the analytics engines 330. The request may specify criteria for the analytics data to use when generating the dashboard 360, e.g., a time frame, patient characteristics data, geographic region, etc. which may thereby be used to filter the analytics data represented in the dashboard 360). wherein the filtered manipulated data is analyzed to generate a set of insight data (Paragraph Number [0022] teaches the mechanisms of the illustrative embodiments may log or otherwise store historical information about user usage patterns, as well as any recommended dashboards that the user actually selected based on the providing of the recommendation output to the user, for usage tracking and analysis across a plurality of users of the same and/or different organizations. For example, usage tracking and analysis information may identify commonalities between users that use a particular dashboard as to other dashboards accessed thereafter within a same user session, commonalities of search terms used in search queries after accessing the dashboard, commonalities of keywords included in instant messages sent after accessing a dashboard, or the like, which all point to users not obtaining the information they seek from the accessed dashboard and a commonality in the information that these users thought would be available in the dashboard. This provides insight into ways in which the accessed dashboard may be improved, a new dashboard that may be of use to users, or other modifications to the set of dashboards that are made available to users at the various organizations. Paragraph Number [0024] teaches a dashboard provider may provide the same or similar dashboards to multiple organizations. 
These organizations may customize these dashboards for their own personal use and may use them to access their own personal backend data stores. Tracking and analysis of usage patterns of users of the various organizations may provide insights into recommendations for how to modify, or add to, the dashboard offerings for a particular organization. In addition, looking at usage patterns across multiple similar organizations may identify areas where the dashboard offerings are not meeting the information needs of users in general, regardless of the particular organization, and provide insights into how all dashboard offerings to all organizations may be improved). wherein the generated set of insight data comprises recommended actions relating to the insight data (Paragraph Number [0054] teaches analyzing the frequency of occurrence of search terms searched by users prior to or following the user's interactions with the dashboard being analyzed may identify terms representing analytics data that the users are attempting to gain access to and may be a basis for correlating with the metadata indicating the types of analytics data represented in other dashboards. A similar approach may be performed using natural language processing of instant messages and other types of communications conducted by users via the user experience systems 120 to thereby identify terms and/or phrases that users use most frequently in combination with their interactions with a particular dashboard being analyzed. These terms/phrases may be used to correlate with metadata or other information describing the types of analytics data represented in the various dashboards. From this correlation, recommendations as to links to other dashboards and/or portions of other dashboards that may be included in the dashboard being analyzed may be generated and output to a dashboard creator and/or provider of dashboards as part of the analytics system 100). outputting, by the computing apparatus, a new dashboard based on the generated set of insight data as a response to the data processing request (Paragraph Number [0053] teaches differences between the information presented in dashboard B and dashboard A may be identified and a recommendation to modify dashboard A to include a link to dashboard B or to include portions of dashboard B in dashboard A may be generated. For example, the highest frequency of occurrences of interactions between a user and other dashboards, e.g., dashboard B, or other user experience systems 120, e.g., instant messaging system(s), search engine(s), etc., and/or portions thereof, that are associated with the dashboard being analyzed, e.g., dashboard A, may be utilized to generate recommendations as to additions or modifications to be made to the dashboard being analyzed. Paragraph Number [0054] teaches analyzing the frequency of occurrence of search terms searched by users prior to or following the user's interactions with the dashboard being analyzed may identify terms representing analytics data that the users are attempting to gain access to and may be a basis for correlating with the metadata indicating the types of analytics data represented in other dashboards). 
modifying, by the computing apparatus, the interface to display the new dashboard with the filtered manipulated data and the generated set of insight data to consolidate data from the plurality of stored dashboards into a single interface (Paragraph Number [0043] teaches the implementation of the mechanisms of the illustrative embodiments improves the functionality of the computing device and provides a useful and concrete result that facilitates providing guidance to users as to dashboards where predictive analytics determine that the information sought by the user may be obtained as well as providing recommendations, such as to system administrators or other providers of dashboards, as to dashboard modifications or new dashboards that could be generated to provide information that the user and/or other users of the same or different organizations, tend to seek when accessing pre-defined existing dashboards. Paragraph Number [0053] teaches differences between the information presented in dashboard B and dashboard A may be identified and a recommendation to modify dashboard A to include a link to dashboard B or to include portions of dashboard B in dashboard A may be generated. For example, the highest frequency of occurrences of interactions between a user and other dashboards, e.g., dashboard B, or other user experience systems 120, e.g., instant messaging system(s), search engine(s), etc., and/or portions thereof, that are associated with the dashboard being analyzed, e.g., dashboard A, may be utilized to generate recommendations as to additions or modifications to be made to the dashboard being analyzed. Paragraph Number [0081] teaches combination approaches may also be applied when generating recommendations, where these combination approaches combine similar user dashboard usage information with user behavior information to generate recommendations for a user. Moreover, recommendations made for one user may also be logged in the dashboard usage data 355 and/or as part of a user's profile in the user registry 365, and used as a basis for generating recommendations for other users having similar characteristics or similar dashboard usage behavior patterns). As per claim 8, Dunwoody teaches: A non-transitory computer readable medium having stored thereon instructions comprising executable code, which when executed by at least one processor, cause the processor to: (Paragraph Number [0030] teaches the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. 
A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire). The remainder of the claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1. As per claim 14, Dunwoody teaches: An analytics apparatus, comprising memory comprising programmed instructions stored in the memory and processors configured to be capable of executing the programmed instructions stored in the memory to (Paragraph Number [0006] teaches a computer program product comprising a computer useable or readable medium having a computer readable program is provided. The computer readable program, when executed on a computing device, causes the computing device to perform various ones of, and combinations of, the operations outlined above with regard to the method illustrative embodiment. Paragraph Number [0025] teaches before beginning the discussion of the various aspects of the illustrative embodiments in more detail, it should first be appreciated that throughout this description the term “mechanism” will be used to refer to elements of the present invention that perform various operations, functions, and the like. A “mechanism,” as the term is used herein, may be an implementation of the functions or aspects of the illustrative embodiments in the form of an apparatus, a procedure, or a computer program product. In the case of a procedure, the procedure is implemented by one or more devices, apparatus, computers, data processing systems, or the like. In the case of a computer program product, the logic represented by computer code or instructions embodied in or on the computer program product is executed by one or more hardware devices in order to implement the functionality or perform the operations associated with the specific “mechanism.” (See also Paragraph Number [0030])). The remainder of the claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1. As per claims 2, 9, and 15, Dunwoody teaches each of the limitations of claims 1, 8, and 14 respectively. In addition, Dunwoody teaches: wherein the manipulated data is generated based on: a time period trend, a predetermined threshold, a predetermined goal, historical usage data, or a predetermined contractual obligation. (Paragraph Number [0022] teaches the mechanisms of the illustrative embodiments may log or otherwise store historical information about user usage patterns, as well as any recommended dashboards that the user actually selected based on the providing of the recommendation output to the user, for usage tracking and analysis across a plurality of users of the same and/or different organizations. 
For example, usage tracking and analysis information may identify commonalities between users that use a particular dashboard as to other dashboards accessed thereafter within a same user session, commonalities of search terms used in search queries after accessing the dashboard, commonalities of keywords included in instant messages sent after accessing a dashboard, or the like, which all point to users not obtaining the information they seek from the accessed dashboard and a commonality in the information that these users thought would be available in the dashboard. This provides insight into ways in which the accessed dashboard may be improved, a new dashboard that may be of use to users, or other modifications to the set of dashboards that are made available to users at the various organizations. Paragraph Number [0024] teaches a dashboard provider may provide the same or similar dashboards to multiple organizations. These organizations may customize these dashboards for their own personal use and may use them to access their own personal backend data stores. Tracking and analysis of usage patterns of users of the various organizations may provide insights into recommendations for how to modify, or add to, the dashboard offerings for a particular organization. In addition, looking at usage patterns across multiple similar organizations may identify areas where the dashboard offerings are not meeting the information needs of users in general, regardless of the particular organization, and provide insights into how all dashboard offerings to all organizations may be improved. (this teaches at least the option of historical usage data)). As per claims 3, 10, and 18, Dunwoody teaches each of the limitations of claims 1 and 2, 8 and 9, and 14 and 15 respectively. In addition, Dunwoody teaches: wherein generating manipulated data based on the predetermined threshold further comprises: retrieving benchmark data from a database (Paragraph Number [0079] teaches recommendations may be constructed based on the user's behavior as it relates to their interactions with other user experience systems and, in some cases, the subject matter of the content that the user accessed via these other user experience systems. For example, a recommendation may be constructed based on a user's behavior as it relates to their consumption of whitepapers versus time spent on specific dashboards. If a user has read multiple whitepapers related to prescription medication cost drivers, for example, and has spent a significant amount of time interacting with a prescription medication cost dashboard, as may be indicated by the dashboard usage data 355 in comparison to one or more threshold values defining a “significant” amount of time, the DUTAR system 350 may recommend other dashboards tied to the cost of prescription medications). determining the predetermined threshold, wherein the predetermined threshold is received from the client device or generated based on benchmark data (Paragraph Number [0079] teaches recommendations may be constructed based on the user's behavior as it relates to their interactions with other user experience systems and, in some cases, the subject matter of the content that the user accessed via these other user experience systems. For example, a recommendation may be constructed based on a user's behavior as it relates to their consumption of whitepapers versus time spent on specific dashboards. 
If a user has read multiple whitepapers related to prescription medication cost drivers, for example, and has spent a significant amount of time interacting with a prescription medication cost dashboard, as may be indicated by the dashboard usage data 355 in comparison to one or more threshold values defining a “significant” amount of time, the DUTAR system 350 may recommend other dashboards tied to the cost of prescription medications). As per claims 4, 11, and 16, Dunwoody teaches each of the limitations of claims 1 and 2, 8 and 9, and 14 and 15 respectively. In addition, Dunwoody teaches: wherein generating the manipulated data based on the time period trend further comprising: receiving a time period and interval from a client (Paragraph Number [0079] teaches recommendations may be constructed based on the user's behavior as it relates to their interactions with other user experience systems and, in some cases, the subject matter of the content that the user accessed via these other user experience systems. For example, a recommendation may be constructed based on a user's behavior as it relates to their consumption of whitepapers versus time spent on specific dashboards. If a user has read multiple whitepapers related to prescription medication cost drivers, for example, and has spent a significant amount of time interacting with a prescription medication cost dashboard, as may be indicated by the dashboard usage data 355 in comparison to one or more threshold values defining a “significant” amount of time, the DUTAR system 350 may recommend other dashboards tied to the cost of prescription medications). generating a graphical representation of the dashboard configuration data as the new dashboard (Paragraph Number [0047] teaches the analytics systems 110 generate analytics results data which again, may be stored in the backend data stores 150, or in other storage (not shown) for utilization in providing output representations of this results data, e.g., graphical, textual, or audible outputs presenting the analytics results data for use by a user. The user experience systems 120 provide the logic for generating these output representations which, in accordance with the illustrative embodiments described herein, include one or more dashboards. A “dashboard” as the term is used herein refers to a graphical and/or textual user interface that provides a plurality of different representations of the same or different analytics results data in a single output, as previously discussed above). manipulating the graphical representation based on the time period and interval (Paragraph Number [0053] teaches differences between the information presented in dashboard B and dashboard A may be identified and a recommendation to modify dashboard A to include a link to dashboard B or to include portions of dashboard B in dashboard A may be generated. For example, the highest frequency of occurrences of interactions between a user and other dashboards, e.g., dashboard B, or other user experience systems 120, e.g., instant messaging system(s), search engine(s), etc., and/or portions thereof, that are associated with the dashboard being analyzed, e.g., dashboard A, may be utilized to generate recommendations as to additions or modifications to be made to the dashboard being analyzed. 
For example, based on the portions of another dashboard manipulated by user input subsequent to the dashboard being analyzed, e.g., dashboard A, and correlating information about those portions that indicates the type of analytics data represented in those portions as well as the way in which that analytics data is represented in the dashboard, a recommendation that similar types of analytics data should be included in the dashboard being analyzed and a recommendation as to the way in which that analytics data may be represented in the dashboard may be provided). As per claims 6, 12, and 19, Dunwoody teaches each of the limitations of claims 1, 8, and 14. In addition, Dunwoody teaches: wherein the computing apparatus receives the one or more selected filters after sending a plurality of recommended filters (Paragraph Number [0072] teaches the DUTAR system 350 provides logic for listening to the user inputs to dashboards and/or other user experience systems 365, such as search engines, instant messaging systems, and the like, to log and/or record user inputs both before, during, and after interacting with a dashboard of interest. The DUTAR system 350 may utilize agents 375, i.e. portions of code designed to log and transmit information about user interactions with user experience systems, deployed on client systems to obtain the user input information tracking the user's input patterns. In this way, the user's input patterns may be identified before, during, and after a dashboard is provided to the user which may then be indicative of the type of data the user is attempting to access. This information may be utilized by predictive analytics logic 357 of the DUTAR system 350 to predict what type of data or information the user is attempting to access and generate a recommendation for the user. Moreover, the input patterns of the user may be accumulated with other input pattern data for the particular dashboard and stored in the dashboard usage data storage 355 for presentation of dashboard usage metrics information and/or recommendations as to how to improve dashboard offerings via administrator consoles 380. Paragraph Number [0073] teaches the recommendations for the user during a user session with a dashboard 360 may be generated based on the predictive analytics of predictive analytics logic 357 in the DUTAR system 350 and may be presented to the user via their client system 370 either as a separate recommendation or as a recommendation integrated with a presented dashboard 360). wherein the plurality of recommended filters is generated based on account information (Paragraph Number [0073] teaches the recommendations for the user during a user session with a dashboard 360 may be generated based on the predictive analytics of predictive analytics logic 357 in the DUTAR system 350 and may be presented to the user via their client system 370 either as a separate recommendation or as a recommendation integrated with a presented dashboard 360. Paragraph Number [0074] teaches the dashboard usage data 355 stores cumulative usage metric information for each predefined dashboard in the storage 345 indicating various characteristics of user inputs associated with the predefined dashboard which may also be correlated with particular user characteristics information stored in the user registry 356, such as may be part of a user profile data structure or the like. 
For example, the metrics information may indicate the number of users that have input certain search terms into search engines before or after using the dashboard, metrics of how many times users accessed portions of the particular dashboard, metrics of how many times users accessed each of one or more other pre-defined dashboards either before or after accessing the dashboard of interest, metrics of the number of times users used certain terms or phrases during instant messaging within a predefined time period before or after accessing the dashboard of interest, or the like. These metrics may be presented to an administrator via one or more administrator consoles 380. (See also Paragraph Number [0072])). wherein the account information is preselected by a client or received prior to the generation of the plurality of recommended filters (Paragraph Number [0074] teaches the dashboard usage data 355 stores cumulative usage metric information for each predefined dashboard in the storage 345 indicating various characteristics of user inputs associated with the predefined dashboard which may also be correlated with particular user characteristics information stored in the user registry 356, such as may be part of a user profile data structure or the like. For example, the metrics information may indicate the number of users that have input certain search terms into search engines before or after using the dashboard, metrics of how many times users accessed portions of the particular dashboard, metrics of how many times users accessed each of one or more other pre-defined dashboards either before or after accessing the dashboard of interest, metrics of the number of times users used certain terms or phrases during instant messaging within a predefined time period before or after accessing the dashboard of interest, or the like. These metrics may be presented to an administrator via one or more administrator consoles 380). As per claims 7, 13, and 20, Dunwoody teaches each of the limitations of claims 1, 8, and 14 respectively. In addition, Dunwoody teaches: wherein the generated set of insight data comprises recommended actions (Paragraph Number [0054] teaches analyzing the frequency of occurrence of search terms searched by users prior to or following the user's interactions with the dashboard being analyzed may identify terms representing analytics data that the users are attempting to gain access to and may be a basis for correlating with the metadata indicating the types of analytics data represented in other dashboards. A similar approach may be performed using natural language processing of instant messages and other types of communications conducted by users via the user experience systems 120 to thereby identify terms and/or phrases that users use most frequently in combination with their interactions with a particular dashboard being analyzed. These terms/phrases may be used to correlate with metadata or other information describing the types of analytics data represented in the various dashboards. From this correlation, recommendations as to links to other dashboards and/or portions of other dashboards that may be included in the dashboard being analyzed may be generated and output to a dashboard creator and/or provider of dashboards as part of the analytics system 100). 
wherein the recommended actions are generated based on normalized historical actions from a database (Paragraph Number [0022] teaches the mechanisms of the illustrative embodiments may log or otherwise store historical information about user usage patterns, as well as any recommended dashboards that the user actually selected based on the providing of the recommendation output to the user, for usage tracking and analysis across a plurality of users of the same and/or different organizations. For example, usage tracking and analysis information may identify commonalities between users that use a particular dashboard as to other dashboards accessed thereafter within a same user session, commonalities of search terms used in search queries after accessing the dashboard, commonalities of keywords included in instant messages sent after accessing a dashboard, or the like, which all point to users not obtaining the information they seek from the accessed dashboard and a commonality in the information that these users thought would be available in the dashboard. This provides insight into ways in which the accessed dashboard may be improved, a new dashboard that may be of use to users, or other modifications to the set of dashboards that are made available to users at the various organizations).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 5 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2021/0241893 to Dunwoody et al. (hereafter referred to as Dunwoody) in view of U.S. Patent Application Publication Number 2022/0261711 to Krishna et al. (hereafter referred to as Krishna).

As per claims 5 and 17, Dunwoody teaches each of the limitations of claims 1 and 2, and 14 and 15, respectively.
Dunwoody teaches receiving and analyzing/filtering input data to determine a better dashboard for presenting data but does not explicitly teach where the data gathered is related to contracts and contractual obligations gathered through OCR, as described by the following citations from Krishna:

wherein generating the manipulated data based on the predetermined contractual obligation further comprises: receiving contractual textual data from the client device (Paragraph Number [0047] teaches when inputs 730 (i.e., contract clause text and descriptions) are submitted to a first stage 702 of the specificity model 700, an input embedding step can pull data from repository 720 along with inputs 730 for processing by a first multi-head attention mechanism and forward feeding to a second stage 704 of specificity model 700. The second stage 704 includes two input ends, where output labels (identifying the category types that the clause may be assigned) are provided for input embedding, masked multi-head attention, and then collected with the inputs 730 fed by the first stage 702 for additional processing by a second multi-head attention mechanism. The transformed data is linearized and run through a softmax classifier, thereby classifying each clause under a specific category as appropriate, or noting the clause has no basis for a specificity classification as defined by the system).

determining the predetermined contractual obligation based on the contractual textual data (Paragraph Number [0048] teaches a user submits a query regarding a particular contract (e.g., “What are the warranty clauses?”; “What are the liability terms?”; “What are the indemnity terms?”, etc.). The system performs an indexing-based search in a second step 754, re-ranks and title matches the data in a third step 756, and then submits the processed data to the MRC Engine in a fourth step 758. The specificity model receives the clause information in a fifth step 760, and generates an output comprising the contract clauses classified under specific categories associated with the query (e.g., unfavorable terms, missing protection, unlimited clauses, etc.). In some embodiments, the model is also configured to calculate a level or percentage representing the magnitude of risk associated with the given clause in that category).

wherein the contractual textual data is generated by using optical character recognition to extract data from a received contract from the client device (Paragraph Number [0041] teaches the system includes a specialized optical character recognition (OCR) engine to generate digitized documents. In some implementations, the OCR engine may include an OmniPage OCR engine, a Google Cloud Vision API OCR engine, a Microsoft Azure Computer Vision API OCR engine, an IBM Bluemix OCR engine, and/or the like. In some implementations, the OCR engine may convert the documents into an electronic format (e.g., the digitized documents). Optical character recognition involves a conversion of images of typed, handwritten, or printed text into machine-encoded text. For example, OCR may be applied to a scanned document, a photo of a document, a photo of a scene that includes text, and/or the like, to produce electronic data (e.g., text data). OCR can be used as a form of information entry from printed paper data records (e.g., printed forms, printed tables, printed reports, identification documents, invoices, bank statements, and/or the like).
Converting printed text to electronic data allows the information represented by the printed text to be electronically edited, searched, stored more compactly, displayed online, and/or used in machine processes such as cognitive computing, machine translation, (extracted) text-to-speech, key data and text mining, and/or the like. Implementations of OCR may employ pattern recognition, artificial intelligence, computer vision, and/or the like).

Both Dunwoody and Krishna are directed to data gathering and analysis to determine what information to pass to an information display. Dunwoody discloses receiving and analyzing/filtering input data to determine a better dashboard for presenting data. Krishna improves upon Dunwoody by disclosing where the data gathered is related to contracts and contractual obligations gathered through OCR. One of ordinary skill in the art would be motivated to further include where the data gathered is related to contracts and contractual obligations gathered through OCR, to efficiently parse, correlate, and present information to a user regarding physical documents as well as user queries. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of receiving and analyzing/filtering input data to determine a better dashboard for presenting data in Dunwoody to further utilize where the data gathered is related to contracts and contractual obligations gathered through OCR as disclosed in Krishna, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Response to Arguments

Applicant’s arguments filed 1/27/2026 have been fully considered but they are not persuasive.

Applicant argues that the claims do not recite an abstract idea. (See Applicant’s Remarks, 1/27/2026, pgs. 8-14). Examiner respectfully disagrees. As noted in the 35 USC 101 analysis presented above, the claims recite an abstract concept that is encapsulated by decision making analogous to a method of organizing human activity. Examiner notes that each of the limitations that encapsulate the abstract concepts is recited in the above 35 USC 101 rejection. Additionally, the claims do not recite a practical application of the abstract concepts in that there is no specific use or application of the method steps other than to make conclusory determinations or to further implement abstract concepts that further organize human activities (i.e. humans completing tasks). The claims do not recite any particular use for these determinations that improves upon the underlying computer technology. Instead, Examiner asserts that the claim language is only used as implementation of the abstract concepts utilizing technology. The claims are not directed towards the technology, but are instead directed towards the overarching abstract concepts, and in this way are generally linking the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Accordingly, Examiner does not find that the claims recite a practical application of the abstract concepts recited by the claims, nor do the claims recite significantly more than the underlying abstract concepts.
Applicant argues that the newly amended claim language is not taught by the combination of cited references. (See Applicant’s Remarks, 1/27/2026, pgs. 15-17). Examiner respectfully disagrees and notes that Applicant’s arguments are moot in that new citations/explanations from the Dunwoody reference have been applied to the newly amended claim language. In response to Applicant’s assertions, Examiner directs Applicant to review the revised 35 USC 102 rejection presented above.

In response to Applicant’s specific assertion that Dunwoody does not teach the limitation “wherein the generated set of insight data comprises recommended actions relating to the insight data,” Examiner respectfully disagrees. Dunwoody teaches that analyzing the frequency of occurrence of search terms searched by users prior to or following the user's interactions with the dashboard being analyzed may identify terms representing analytics data that the users are attempting to gain access to and may be a basis for correlating with the metadata indicating the types of analytics data represented in other dashboards. A similar approach may be performed using natural language processing of instant messages and other types of communications conducted by users via the user experience systems 120 to thereby identify terms and/or phrases that users use most frequently in combination with their interactions with a particular dashboard being analyzed. These terms/phrases may be used to correlate with metadata or other information describing the types of analytics data represented in the various dashboards. From this correlation, recommendations as to links to other dashboards and/or portions of other dashboards that may be included in the dashboard being analyzed may be generated and output to a dashboard creator and/or provider of dashboards as part of the analytics system 100. (See Paragraph Number [0054]). This paragraph teaches that recommended actions (such as the selection of specific dashboard elements) can be performed in response to data and metadata that is gathered about users' usage of their personal dashboard (i.e. insight data). Thus, the Dunwoody reference teaches gathering insight data in response to a data analysis algorithm or filter and then using that data to provide recommendations to a user of what additional dashboard components should be added to their own personal dashboard. As such, Examiner asserts that the Dunwoody reference teaches “wherein the generated set of insight data comprises recommended actions relating to the insight data.” Examiner is not persuaded by the distinctions Applicant is attempting to make.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H. DIVELBISS, whose telephone number is (571) 270-0166. The fax phone number is 571-483-7110. The examiner can normally be reached M-Th, 7:00 - 5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at (571) 272-6787. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system.
Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/M.H.D/
Examiner, Art Unit 3624

/Jerry O'Connor/
Supervisory Patent Examiner, Group Art Unit 3624
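
The §101 analysis in this office action follows the fixed decision sequence of MPEP 2106 (the Alice/Mayo framework). As a non-authoritative sketch of that flow, the snippet below encodes the examiner's findings for claims 1-20 as booleans; the function and field names are our own shorthand, not USPTO terminology:

```python
# Illustrative sketch of the MPEP 2106 eligibility flow applied in the
# office action above. Shorthand names are ours, not USPTO terminology.

from dataclasses import dataclass

@dataclass
class EligibilityFindings:
    statutory_category: bool          # Step 1: process/machine/manufacture/composition?
    recites_judicial_exception: bool  # Step 2A, Prong 1: abstract idea recited?
    practical_application: bool       # Step 2A, Prong 2: exception integrated?
    significantly_more: bool          # Step 2B: inventive concept beyond the exception?

def eligible_under_101(f: EligibilityFindings) -> bool:
    if not f.statutory_category:
        return False                  # fails Step 1 outright
    if not f.recites_judicial_exception:
        return True                   # no exception recited -> eligible
    if f.practical_application:
        return True                   # exception integrated -> eligible
    return f.significantly_more      # otherwise Step 2B decides

# The examiner's findings for claims 1-20: statutory category (method/system),
# abstract idea recited, no practical application, nothing significantly more.
print(eligible_under_101(EligibilityFindings(True, True, False, False)))  # False -> rejected
```

The rejection turns on the last two branches: an argument that succeeds at Prong 2 (practical application) or Step 2B (significantly more) flips the outcome without disputing that an abstract idea is recited.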

Prosecution Timeline

Oct 18, 2023: Application Filed
Jun 20, 2025: Non-Final Rejection — §101, §102, §103
Sep 25, 2025: Response Filed
Oct 21, 2025: Final Rejection — §101, §102, §103
Jan 27, 2026: Request for Continued Examination
Feb 20, 2026: Response after Non-Final Action
Mar 01, 2026: Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572889 — Optimization of Large-scale Industrial Value Chains (2y 5m to grant; granted Mar 10, 2026)
Patent 12503000 — OPTIMIZATION PROCEDURE FOR THE ENERGY MANAGEMENT OF A SOLAR ENERGY INSTALLATION WITH STORAGE MEANS IN COMBINATION WITH THE CHARGING OF AN ELECTRIC VEHICLE AND SYSTEM (2y 5m to grant; granted Dec 23, 2025)
Patent 12493860 — WASTE MANAGEMENT SYSTEM AND METHOD (2y 5m to grant; granted Dec 09, 2025)
Patent 12482011 — FAMILIARITY DEGREE ESTIMATION APPARATUS, FAMILIARITY DEGREE ESTIMATION METHOD, AND RECORDING MEDIUM (2y 5m to grant; granted Nov 25, 2025)
Patent 12450574 — METHOD FOR WASTE MANAGEMENT UTILIZING ARTIFICAL NEURAL NETWORK SYSTEM (2y 5m to grant; granted Oct 21, 2025)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 23%
With Interview: 46% (+23.4%)
Median Time to Grant: 4y 1m
PTA Risk: High
Based on 367 resolved cases by this examiner. Grant probability derived from career allow rate.
