DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application is being examined under the pre-AIA first to invent provisions.
Status of Application
This communication is a Final Office Action in response to the Arguments, Remarks, and Amendments filed November 24, 2025. Claims 1-20 are currently pending. No claims are allowed.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 11/24/2025 was filed in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1-20 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of copending Application No. 18/142,531 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims are substantially identical but for slight differences in the data descriptions; the claimed functions are the same.
This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without integration into a practical application and without significantly more.
Under MPEP 2106, when considering subject matter eligibility under 35 U.S.C. § 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (step 1). If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea) (step 2A prong 1), and if so, it must additionally be determined whether the claim is integrated into a practical application (step 2A prong 2). If an abstract idea is present in the claim without integration into a practical application, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself (step 2B).
In the instant case, claims 1-20 are directed to a method and system. Thus, each of the claims falls within one of the four statutory categories (step 1).
Under Step 2A Prong 1, the test is to identify whether the claims are “directed to” a judicial exception. Examiner notes that the claimed invention is directed to an abstract idea in that the instant application is directed to certain methods of organizing human activity, specifically commercial interactions and behaviors and managing personal behavior and/or interactions between people (see MPEP 2106.04(a)(2)(II)), and mental processes (see MPEP 2106.04(a)(2)(III)).
Claim 1 and 10. A method and system comprising: identifying a first entity having first intellectual-property (IP) assets; generating a graphical user interface (GUI) configured to display on a computing device, the GUI configured to: display one or more second entities having second intellectual-property assets that are similar to one or more of the first IP assets; and receive an input from the computing device; receiving, via the GUI, input data representing the input, the input data indicating selection of at least a second entity of the one or more second entities as selected entities; generating a machine learning model configured to generate clusters of IP assets; receiving feedback data indicating performance of the machine learning model; generating a training dataset utilizing a transformed version of the feedback data; training the machine learning model utilizing the training dataset such that a trained machine learning model is generated; generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model, first data representing one or more result sets, wherein individual ones of the one or more result sets include one or more clusters of the second IP assets, wherein individual ones of the one or more clusters include at least: second data indicating a coverage metric associated with individual ones of the second IP assets; third data indicating an opportunity metric associated with individual ones of the second IP assets; and fourth data indicating an exposure metric associated with individual ones of the second IP assets; and causing the GUI to display a first result set of the one or more result sets.
Claim 18. A method comprising: identifying a first entity having first intellectual property (IP) assets; generating a graphical user interface (GUI) configured to display on a computing device, the GUI configured to: display second entities having second IP assets that are similar to one or more of the first IP assets; and receive an input from the computing device; receiving, via the GUI, input data representing the input, the input data indicating selection of at least one of the second entities as selected entities; generating a machine learning model configured to generate clusters of IP assets; receiving feedback data indicating performance of the machine learning model; generating a training dataset utilizing a transformed version of the feedback data; training the machine learning model utilizing the training dataset such that a trained machine learning model is generated; generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model, first data representing a first result set including at least: a first cluster including a first portion of the second IP assets associated with the selected entities; a first metric associated with the second IP assets included in the first portion; a second cluster including a second portion of the second IP assets associated with the selected entities; and a second metric associated with the second IP assets included in the second portion; and causing the GUI to display the first result set.
Examiner notes that the claimed invention is directed to concepts that can be performed mentally and are a product of human mental work. Because the limitations above closely follow the steps of processing information related to market information associated with quantified technology, and the steps involve human judgments, observations, and evaluations that can be practically or reasonably performed in the human mind, the claim recites an abstract idea consistent with the “mental process” grouping set forth in MPEP 2106.04(a)(2)(III).
Alternatively, Examiner notes that claims 1-20 recite processing and displaying information related to intellectual property assets, and are similar to the abstract idea identified in MPEP 2106.04(a)(2)(II) in grouping “II” in that the claims recite certain methods of organizing human activity such as fundamental economic activities, legal/business interactions, and risk management (i.e., technology/asset analyzing, valuing, processing, and protecting). These limitations are merely further embellishments of the abstract idea and do not further limit the claimed invention so as to render the claims patentable subject matter. The limitations, substantially comprising the body of the claim, recite standard processes found in standard practice in intellectual property management. Such processes are common practice in the creation, production, prosecution, litigation, and management of IP assets and technology. Because the limitations above closely follow steps standard in interactions between people and businesses (such as technology/asset analyzing, valuing, processing, and protecting), and the steps of the claims involve organizing human activity, the claim recites an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II).
Furthermore, the claims recite the familiar concept of property valuation. As the Supreme Court explained in Alice, claims involving “a fundamental economic practice long prevalent in our system of commerce,” such as the concepts of hedging and intermediated settlement, are patent-ineligible abstract ideas. Alice, 134 S. Ct. at 2356 (quoting Bilski v. Kappos, 561 U.S. 593, 611 (2010)). It follows that the claims at issue here are directed to an abstract idea. Applicants’ claims recite one or more computers configured to receive a user’s property valuations and display that information. Like the risk hedging in Bilski and the concept of intermediated settlement in Alice, the concept of property valuation, that is, determining a property’s market value, is “a fundamental economic practice long prevalent in our system of commerce.” Id. (quoting Bilski, 561 U.S. at 611). Prospective sellers and buyers have long valued property, and doing so is necessary to the functioning of the market in which that property is traded. As such, claim 1 is directed to the abstract idea of intellectual property valuation and is similar to the abstract idea identified in MPEP 2106.04(a)(2)(II) in grouping “II” in that the claims recite certain methods of organizing human activity such as fundamental economic practices. These limitations are merely further embellishments of the abstract idea and do not further limit the claimed invention so as to render the claims patentable subject matter. The limitations, substantially comprising the body of the claim, recite standard processes found in standard practice in intellectual property creation, valuation, protection, and management. Such processes are common practice when creating and protecting any intellectual property.
Because the limitations above closely follow steps standard in fundamental economic practices such as property valuation, and the steps of the claims involve organizing human activity, the claim recites an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II).
Examiner notes that the claims now contain language directed to “generating a machine learning model configured to generate clusters of IP assets; receiving feedback data indicating performance of the machine learning model; generating a training dataset utilizing a transformed version of the feedback data; training the machine learning model utilizing the training dataset such that a trained machine learning model is generated; generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model”, which amounts to, under the broadest reasonable interpretation, a system requiring specific mathematical calculations (training the algorithm using feedback and model information). “In some cases, a machine learning (ML) component 256 may be configured to train one or more ML model(s) using machine-learning mechanisms. For example, a machine-learning mechanism can analyze historical data 208, market data 226, and/or any other type of data stored or otherwise accessible by the data store 126, associated with one or more entities, technology spaces, and/or markets, configured as training data to train a data model that creates an output, which can be a recommendation, a score, a respective probability, a threshold probability, and/or another indication. Machine-learning mechanisms can include, but are not limited to supervised learning algorithms (e.g., artificial neural networks, Bayesian statistics, support vector machines, decision trees, classifiers, k-nearest neighbor, etc.), unsupervised learning algorithms (e.g., artificial neural networks, association rule learning, hierarchical clustering, cluster analysis, etc.), semi-supervised learning algorithms, deep learning algorithms, etc., statistical models, etc.” (see at least Specification ¶ 143) and therefore encompasses mathematical concepts.
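For illustration of why the recited clustering limitations are treated as encompassing mathematical concepts, the kind of cluster analysis quoted above from the Specification can be sketched as follows. This is a hypothetical, minimal sketch only; the function name, the toy "asset" feature vectors, and all parameters are illustrative assumptions and are not drawn from the claims or the Specification:

```python
# Hypothetical sketch of cluster analysis over asset feature vectors,
# of the general kind named in the quoted Specification passage.
# All names and data here are illustrative assumptions.
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means over tuples of numbers."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            groups[i].append(p)
        # update step: each centroid becomes the mean of its group
        centroids = [
            tuple(sum(xs) / len(g) for xs in zip(*g)) if g else centroids[j]
            for j, g in enumerate(groups)
        ]
    return groups

# two well-separated clumps of 2-D "asset" vectors
assets = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
clusters = kmeans(assets, k=2)
print(sorted(len(g) for g in clusters))  # -> [3, 3]
```

Each assignment step is a distance minimization and each update step is an arithmetic mean, i.e., the mathematical calculations that the broadest reasonable interpretation of the training limitations encompasses.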
“For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A, Prong One to make the analysis clear on the record.” MPEP 2106.04, subsection II.B. Under such circumstances, however, the Supreme Court has treated such claims in the same manner as claims reciting a single judicial exception. Id. (discussing Bilski v. Kappos, 561 U.S. 593 (2010)). Here, the claimed invention falls within the mental process/certain method of organizing human activity grouping of abstract ideas, and the steps fall within the mathematical concepts grouping of abstract ideas. The limitations are considered together as a single abstract idea for further analysis. (Step 2A, Prong One: YES).
The conclusion that the claim recites an abstract idea within the groupings of MPEP 2106.04(a)(2) remains grounded in the broadest reasonable interpretation consistent with the description of the invention in the specification. For example, [App. Spec 3], “an intellectual-property landscaping platform”. Accordingly, the Examiner submits that claims 1-20 recite an abstract idea based on the language identified in claims 1, 10, and 18, and the abstract ideas previously identified based on that language remain consistent with the groupings of Step 2A Prong 1 of MPEP 2106.04(a)(1).
If the claims are directed to the judicial exception of an abstract idea, it must then be determined under Step 2A Prong 2 whether the judicial exception is integrated into a practical application. Examiner notes that the considerations under Step 2A Prong 2 comprise most of the considerations previously evaluated in the context of Step 2B. The Examiner submits that the considerations previously discussed in determining that the claim does not recite “significantly more” at Step 2B would be evaluated the same under Step 2A Prong 2 and result in the determination that the claim does not integrate the abstract idea into a practical application.
The instant application fails to integrate the judicial exception into a practical application because the instant application merely recites the words “apply it” (or an equivalent) with the judicial exception or merely includes instructions to implement an abstract idea. The instant application is directed to a method instructing the reader to implement the identified method of organizing human activity of fundamental economic activities, legal/business interactions, and risk management (i.e., technology/asset analyzing, valuing, processing, and protecting) on generically claimed computer structure. For instance, the additional elements or combination of elements other than the abstract idea itself include elements such as “interface”, “processor”, “trained machine learning model”, and “memory” recited at a high level of generality. These elements do not themselves amount to an improvement to the interface or computer, to a technology, or to another technical field. This is consistent with Applicant’s disclosure, which states: “The user interface(s) 114 may be configured to display information associated with the IP landscaping platform and to receive user input associated with the IP landscaping platform. [00100] The remote computing resources 104 may include one or more components such as, for example, one or more processors 116, one or more network interfaces 118, and/or computer-readable media 120. The computer-readable media 120 may include one or more components, such as, for example, a landscaping component 122, an scoring component 124, and/or one or more data store(s) 126.” (App. Spec. ¶¶ 99-100).
Accordingly, the claimed “system” read in light of the specification employs a wide range of possible devices comprising a number of components that are “well-known” and included in an indiscriminate “interface”, “trained machine learning model”, “processor”, and “memory” (e.g., processing device, modules). Thus, the claimed structure amounts to appending generic computer elements to the abstract idea comprising the body of the claim. The computing elements are involved only at a general, high level and have no particular role within any of the functions other than making the method a computer-implemented method using a generically claimed “interface”, “trained machine learning model”, “processor”, and “memory”; even basic, generic recitations that imply use of the computer, such as storing information via servers, would add little if anything to the abstract idea.
Similarly, reciting the abstract idea as software functions used to program a generic computer is not significant or meaningful: generic computers are programmed with software to perform various functions every day. A programmed generic computer is not a particular machine and by itself does not amount to an inventive concept because, as discussed in MPEP 2106.05(a), adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, as discussed in Alice, 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)), is not enough to integrate the exception into a practical application. Further, it is not relevant that a human may perform a task differently from a computer; it is necessarily true that a human might apply an abstract idea in a different manner from a computer. What matters is the application: “stating an abstract idea while adding the words ‘apply it with a computer’” will not render an abstract idea non-abstract. Tranxition v. Lenovo, Nos. 2015-1907, -1941, -1958 (Fed. Cir. Nov. 16, 2016), slip op. at 7-8.
Here, the instructions entirely comprise the abstract idea, leaving little if any aspect of the claim for further consideration under Step 2A Prong 2. In short, the role of the generic computing elements recited in claims 1, 10, and 18 is the same as the role of the computer in the claims considered by the Supreme Court in Alice, and the claim as a whole amounts merely to an instruction to apply the abstract idea on a generic computerized system. Therefore, the claims fail to integrate the abstract idea into a practical application (MPEP 2106.04(d)). Under MPEP 2106.05, this supports the conclusion that the claim is directed to an abstract idea, and the analysis proceeds to Step 2B.
Many considerations in Step 2A need not be reevaluated in Step 2B because the outcome will be the same. Here, on the basis of the additional elements other than the abstract idea, considered individually and in combination as discussed above, the Examiner respectfully submits that claims 1, 10, and 18 do not contain any additional elements that individually or as an ordered combination amount to an inventive concept, and the claims are ineligible.
The dependent claims do not recite anything that is found to transform the abstract idea into a patent-eligible invention. The dependent claims merely recite further embellishments of the abstract idea and do not claim anything that amounts to significantly more than the abstract idea itself.
Claims 2-9, 11-17, and 19-20 are directed to further embellishments of the abstract idea in that they are directed to aspects of the intellectual property information processing that is the central theme of the abstract idea identified above, as well as being directed to data processing and transmission, which the courts have recognized as insignificant extra-solution activity (see at least MPEP 2106.05(g)). Data transmission is one of the most basic and fundamental uses of a generic computing device and is not sufficient to amount to significantly more. The Examiner takes the position that simply appending such a well-understood step of data transmission to the judicial exception does not amount to significantly more than the abstract idea.
Therefore, since there are no limitations in the claim that transform the abstract idea into a patent eligible application such that the claim amounts to significantly more than the abstract idea itself, the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter. See MPEP 2106.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent No. 11,232,383 B1 to Burns et al. (hereinafter Burns) in view of U.S. Patent No. 10,685,310 B1 to McCuiston et al. (hereinafter McCuiston).
Referring to Claims 1 and 10 (substantially similar in scope and language), Burns discloses a method comprising:
identifying a first entity having first intellectual-property (IP) assets; and generating a graphical user interface (GUI) configured to display on a computing device (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3), the GUI configured to:
display one or more second entities having second intellectual-property assets that are similar to one or more of the first IP assets (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services to known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phase 0 through 2 discussing the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development); and
receive an input from the computing device (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3);
receiving, via the GUI, input data representing the input, the input data indicating selection of at least a second entity of the one or more second entities as selected entities (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services to known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phase 0 through 2 discussing the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development);
generating a machine learning model configured to generate clusters of IP assets (see also Burns: Col. 28 Line 1-13, Col. 33 Line 5-16, Col. 39 Line 23-56, Col. 42 Line 6-25, Col. 50 Line 20-28, Col. 68 Line 10-65, and Col. 72 Line 1-28: all discussing the utilization of machine learning algorithms that are used to score the technology cluster/group/classification);
receiving feedback data indicating performance of the machine learning model (further addressed below);
generating a training dataset utilizing a transformed version of the feedback data (further addressed below);
training the machine learning model utilizing the training dataset such that a trained machine learning model is generated (further addressed below);
generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model (further addressed below);
generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model, first data representing one or more result sets, wherein individual ones of the one or more result sets include one or more clusters of the second IP assets, wherein individual ones of the one or more clusters include at least: second data indicating a coverage metric associated with individual ones of the second IP assets; third data indicating an opportunity metric associated with individual ones of the second IP assets; and fourth data indicating an exposure metric associated with individual ones of the second IP assets (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services to known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phase 0 through 2 discussing the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development; see also Burns: Col. 28 Line 1-13, Col. 33 Line 5-16, Col. 39 Line 23-56, Col. 42 Line 6-25, Col. 50 Line 20-28, Col. 68 Line 10-65, and Col. 72 Line 1-28: all discussing the utilization of machine learning algorithms that are used to score the technology); and
causing the GUI to display a first result set of the one or more result sets (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services to known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phase 0 through 2 discussing the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Burns specifically discusses the system training the model (see at least Burns: Col. 28 Line 1-12 “The system further including a plurality of backend data enhancement and management decision aids. The system further including automated machine learning and artificial intelligence methods, wherein the automated machine learning and artificial intelligence methods are trained to enhance and improve the management system. The system is further configured for automatic quantitative assessment of opportunities and risks based on a plurality of confidence factors”; see also Burns: Col. 33 Line 10-17; see also Burns: Col. 39 Line 39-54).
Burns further discloses utilizing “predictive analytics techniques including, but not limited to, machine learning (ML), artificial intelligence (AI), neural networks (NNs) (e.g., long short term memory (LSTM) neural networks), deep learning, historical data, and/or data mining to make future predictions and/or models. The predictive analytics includes business planning, technical risks, secondary and tertiary effects of trade-offs, model course of action (COA) in real time, real-time market trends, and the cross track impacts of these elements. The TCF platform is preferably operable to recommend and/or perform actions based on historical data, external data sources, ML, AI, NNs, and/or other learning techniques” (see at least Burns: Col. 81 Line 35-48; see also Burns: Col. 79 Line 45-65).
Burns, however, does not disclose the specific details of using model feedback information to train the model that assesses the assets.
However, McCuiston, which is directed to a method for risk management and compliance associated with products, teaches that it is known to perform:
receiving feedback data indicating performance of the machine learning model (see at least McCuiston: Col. 3 Line 20-36 “The decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model and may train the machine learning model with the processed historical risk management data to generate a trained machine learning model”; see also McCuiston: Col. 5 Line 17-25 “the decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model”; see also McCuiston: Col. 7 Line 1-35, and Col. 8 Line 6-13 “may improve an accuracy of the trained machine learning model generated by the decision platform”; see also McCuiston: Col. 7 Line 54-67; see also McCuiston: Col. 10 Line 32-38; see also McCuiston: Col. 15 Line 19-38 and Col. 16 Line 40-49; see also McCuiston: Col. 17 Line 25-43; see also McCuiston: Col. 19 Line 35-46; see also McCuiston: Col. 20 Line 20-50);
generating a training dataset utilizing a transformed version of the feedback data (see at least McCuiston: Col. 3 Line 20-36 “The decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model and may train the machine learning model with the processed historical risk management data to generate a trained machine learning model”; see also McCuiston: Col. 5 Line 17-25 “the decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model”; see also McCuiston: Col. 7 Line 1-35, and Col. 8 Line 6-13 “may improve an accuracy of the trained machine learning model generated by the decision platform”; see also McCuiston: Col. 7 Line 54-67; see also McCuiston: Col. 10 Line 32-38; see also McCuiston: Col. 15 Line 19-38 and Col. 16 Line 40-49; see also McCuiston: Col. 17 Line 25-43; see also McCuiston: Col. 19 Line 35-46; see also McCuiston: Col. 20 Line 20-50);
training the machine learning model utilizing the training dataset such that a trained machine learning model is generated (see at least McCuiston: Col. 3 Line 20-36 “The decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model and may train the machine learning model with the processed historical risk management data to generate a trained machine learning model”; see also McCuiston: Col. 5 Line 17-25 “the decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model”; see also McCuiston: Col. 7 Line 1-35, and Col. 8 Line 6-13 “may improve an accuracy of the trained machine learning model generated by the decision platform”; see also McCuiston: Col. 7 Line 54-67; see also McCuiston: Col. 10 Line 32-38; see also McCuiston: Col. 15 Line 19-38 and Col. 16 Line 40-49; see also McCuiston: Col. 17 Line 25-43; see also McCuiston: Col. 19 Line 35-46; see also McCuiston: Col. 20 Line 20-50); and
generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model (see at least McCuiston: Col. 3 Line 20-36 “The decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model and may train the machine learning model with the processed historical risk management data to generate a trained machine learning model”; see also McCuiston: Col. 5 Line 17-25 “the decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model”; see also McCuiston: Col. 7 Line 1-35, and Col. 8 Line 6-13 “may improve an accuracy of the trained machine learning model generated by the decision platform”; see also McCuiston: Col. 7 Line 54-67; see also McCuiston: Col. 10 Line 32-38; see also McCuiston: Col. 15 Line 19-38 and Col. 16 Line 40-49; see also McCuiston: Col. 17 Line 25-43; see also McCuiston: Col. 19 Line 35-46; see also McCuiston: Col. 20 Line 20-50).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the feature of receiving, transforming, and utilizing feedback data to train the machine learning model (as disclosed by McCuiston) into the system further including automated machine learning and artificial intelligence methods, wherein the automated machine learning and artificial intelligence methods are trained to enhance and improve the management system (as disclosed by Burns). One of ordinary skill in the art would have been motivated to incorporate the feature of receiving, transforming, and utilizing feedback data to train the machine learning model because it would generate one or more complexity levels, risk data, or recommendations associated with the potential product (see McCuiston: Col. 1 Line 35-42).
Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the feature of receiving, transforming, and utilizing feedback data to train the machine learning model (as disclosed by McCuiston) into the system further including automated machine learning and artificial intelligence methods, wherein the automated machine learning and artificial intelligence methods are trained to enhance and improve the management system (as disclosed by Burns), because the claimed invention is merely a simple arrangement of old elements, with each performing the same function it had been known to perform, yielding no more than one would expect from such arrangement. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by adding the well-known feature of receiving, transforming, and utilizing feedback data to train the machine learning model into the system further including automated machine learning and artificial intelligence methods, wherein the automated machine learning and artificial intelligence methods are trained to enhance and improve the management system). See also MPEP § 2143(I)(A).
Referring to Claims 2 and 11 (substantially similar in scope and language), the combination of Burns and McCuiston teaches the method of claim 1 and system of claim 11, including wherein the second data includes at least one of: geographical data; breadth data; expiration data; diversity data; revenue alignment data; or invalidity data (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Referring to Claims 3 and 12 (substantially similar in scope and language), the combination of Burns and McCuiston teaches the method of claim 1 and system of claim 11, including wherein the third data includes at least one of: filing velocity data; spending data; predictive analytics data; or precedence data (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development; see also Burns: Col. 81 Line 33-56).
Referring to Claims 4 and 13 (substantially similar in scope and language), the combination of Burns and McCuiston teaches the method of claim 1 and system of claim 11, including wherein the fourth data includes at least one of: litigation data; market data; or revenue alignment data (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Referring to Claims 5 and 14 (substantially similar in scope and language), the combination of Burns and McCuiston teaches the method of claim 1 and system of claim 11, including further comprising generating, based at least in part on the coverage metric, the opportunity metric, and the exposure metric, a comprehensive metric, the comprehensive metric including a combination of the coverage metric, the opportunity metric, and the exposure metric (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Referring to Claims 6 and 15 (substantially similar in scope and language), the combination of Burns and McCuiston teaches the method of claim 1 and system of claim 11, including further comprising displaying at least one of the coverage metric, the opportunity metric, the exposure metric, and a comprehensive metric in response to a selection of at least one IP asset displayed on a first portion of the GUI (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Referring to Claims 7 and 16 (substantially similar in scope and language), the combination of Burns and McCuiston teaches the method of claim 6 and system of claim 15, including wherein displaying the at least one of the coverage metric, the opportunity metric, the exposure metric, and the comprehensive metric in response to the selection of the at least one IP asset displayed on the first portion of the GUI comprises displaying an information overlay window within the first portion of the GUI displaying the one or more clusters (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Referring to Claims 8 and 17 (substantially similar in scope and language), the combination of Burns and McCuiston teaches the method of claim 7 and system of claim 16, including wherein displaying the at least one of the coverage metric, the opportunity metric, the exposure metric, and the comprehensive metric in response to the selection of the at least one IP asset displayed on the GUI comprises displaying an information overlay window on a second portion of the GUI next to the first portion of the GUI displaying the one or more clusters (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Referring to Claim 9, the combination of Burns and McCuiston teaches the method of claim 1, including further comprising: identifying, from the second IP assets, foreign IP assets and design IP assets as third IP assets; and removing the third IP assets from the second IP assets prior to generating the data representing the one or more result sets (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Referring to Claim 18, Burns discloses a method comprising:
identifying a first entity having first intellectual property (IP) assets (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3);
generating a graphical user interface (GUI) configured to display on a computing device, the GUI configured to: display second entities having second IP assets that are similar to one or more of the first IP assets (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development); and
receive an input from the computing device (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3);
receiving, via the GUI, input data representing the input, the input data indicating selection of at least one of the second entities as selected entities (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development);
generating a machine learning model configured to generate clusters of IP assets (see also Burns: Col. 28 Line 1-13, Col. 33 Line 5-16, Col. 39 Line 23-56, Col. 42 Line 6-25, Col. 50 Line 20-28, Col. 68 Line 10-65, and Col. 72 Line 1-28: all discussing the utilization of machine learning algorithms that are used to score the technology cluster/group/classification);
receiving feedback data indicating performance of the machine learning model (further addressed below);
generating a training dataset utilizing a transformed version of the feedback data (further addressed below);
training the machine learning model utilizing the training dataset such that a trained machine learning model is generated (further addressed below);
generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model (further addressed below);
generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model, first data representing a first result set including at least: a first cluster including a first portion of the second IP assets associated with the selected entities (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development; see also Burns: Col. 28 Line 1-13, Col. 33 Line 5-16, Col. 39 Line 23-56, Col. 42 Line 6-25, Col. 50 Line 20-28, Col. 68 Line 10-65, and Col. 72 Line 1-28: all discussing the utilization of machine learning algorithms that are used to score the technology cluster/group/classification);
a first metric associated with the second IP assets included in the first portion (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development);
a second cluster including a second portion of the second IP assets associated with the selected entities (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development); and
a second metric associated with the second IP assets included in the second portion (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development); and
causing the GUI to display the first result set (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Col. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2, including the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns Col. 54, Col. 55 Line 15-67, Col. 63-64: discussing and identifying disruptive innovation determination and development).
Burns specifically discusses the system training the model (see at least Burns: Col. 28 Line 1-12 “The system further including a plurality of backend data enhancement and management decision aids. The system further including automated machine learning and artificial intelligence methods, wherein the automated machine learning and artificial intelligence methods are trained to enhance and improve the management system. The system is further configured for automatic quantitative assessment of opportunities and risks based on a plurality of confidence factors”; see also Burns: Col. 33 Line 10-17; see also Burns: Col. 39 Line 39-54).
Burns further discloses utilizing “predictive analytics techniques including, but not limited to, machine learning (ML), artificial intelligence (AI), neural networks (NNs) (e.g., long short term memory (LSTM) neural networks), deep learning, historical data, and/or data mining to make future predictions and/or models. The predictive analytics includes business planning, technical risks, secondary and tertiary effects of trade-offs, model course of action (COA) in real time, real-time market trends, and the cross track impacts of these elements. The TCF platform is preferably operable to recommend and/or perform actions based on historical data, external data sources, ML, AI, NNs, and/or other learning techniques” (see at least Burns: Col. 81 Line 35-48; see also Burns: Col. 79 Line 45-65).
Burns, however, does not disclose the specific details of using model feedback information to train the model that assesses the assets.
However, McCuiston, in a method for risk management and compliance associated with products, teaches it is known to:
receiving feedback data indicating performance of the machine learning model (see at least McCuiston: Col. 3 Line 20-36 “The decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model and may train the machine learning model with the processed historical risk management data to generate a trained machine learning model”; see also McCuiston: Col. 5 Line 17-25 “the decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model”; see also McCuiston: Col. 7 Line 1-35, and Col. 8 Line 6-13 “may improve an accuracy of the trained machine learning model generated by the decision platform”; see also McCuiston: Col. 7 Line 54-67; see also McCuiston: Col. 10 Line 32-38; see also McCuiston: Col. 15 Line 19-38 and Col. 16 Line 40-49; see also McCuiston: Col. 17 Line 25-43; see also McCuiston: Col. 19 Line 35-46; see also McCuiston: Col. 20 Line 20-50);
generating a training dataset utilizing a transformed version of the feedback data (see at least McCuiston: Col. 3 Line 20-36 “The decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model and may train the machine learning model with the processed historical risk management data to generate a trained machine learning model”; see also McCuiston: Col. 5 Line 17-25 “the decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model”; see also McCuiston: Col. 7 Line 1-35, and Col. 8 Line 6-13 “may improve an accuracy of the trained machine learning model generated by the decision platform”; see also McCuiston: Col. 7 Line 54-67; see also McCuiston: Col. 10 Line 32-38; see also McCuiston: Col. 15 Line 19-38 and Col. 16 Line 40-49; see also McCuiston: Col. 17 Line 25-43; see also McCuiston: Col. 19 Line 35-46; see also McCuiston: Col. 20 Line 20-50);
training the machine learning model utilizing the training dataset such that a trained machine learning model is generated (see at least McCuiston: Col. 3 Line 20-36 “The decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model and may train the machine learning model with the processed historical risk management data to generate a trained machine learning model”; see also McCuiston: Col. 5 Line 17-25 “the decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model”; see also McCuiston: Col. 7 Line 1-35, and Col. 8 Line 6-13 “may improve an accuracy of the trained machine learning model generated by the decision platform”; see also McCuiston: Col. 7 Line 54-67; see also McCuiston: Col. 10 Line 32-38; see also McCuiston: Col. 15 Line 19-38 and Col. 16 Line 40-49; see also McCuiston: Col. 17 Line 25-43; see also McCuiston: Col. 19 Line 35-46; see also McCuiston: Col. 20 Line 20-50);
generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model (see at least McCuiston: Col. 3 Line 20-36 “The decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model and may train the machine learning model with the processed historical risk management data to generate a trained machine learning model”; see also McCuiston: Col. 5 Line 17-25 “the decision platform may process the historical risk management data to generate processed historical risk management data in a format for training a machine learning model”; see also McCuiston: Col. 7 Line 1-35, and Col. 8 Line 6-13 “may improve an accuracy of the trained machine learning model generated by the decision platform”; see also McCuiston: Col. 7 Line 54-67; see also McCuiston: Col. 10 Line 32-38; see also McCuiston: Col. 15 Line 19-38 and Col. 16 Line 40-49; see also McCuiston: Col. 17 Line 25-43; see also McCuiston: Col. 19 Line 35-46; see also McCuiston: Col. 20 Line 20-50).
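For illustration only, the sequence of limitations mapped above (receive feedback data indicating model performance, generate a training dataset from a transformed version of that feedback, train the model with the dataset, and generate new output with the trained model) can be sketched as a minimal feedback-retraining loop. The sketch is entirely hypothetical: the names FeedbackRecord and ThresholdModel and the midpoint-threshold heuristic appear in neither Burns nor McCuiston and stand in only for the order of operations recited in the claim.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class FeedbackRecord:
    score: float        # value the model produced for an asset
    was_relevant: bool  # user feedback on that output

class ThresholdModel:
    """Toy model: marks an asset 'relevant' when its score exceeds a threshold."""
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def train(self, dataset: list[tuple[float, bool]]) -> None:
        # Retrain by moving the threshold to the midpoint between the mean
        # relevant and mean non-relevant scores observed in the feedback.
        pos = [s for s, ok in dataset if ok]
        neg = [s for s, ok in dataset if not ok]
        if pos and neg:
            self.threshold = (mean(pos) + mean(neg)) / 2

    def predict(self, score: float) -> bool:
        return score > self.threshold

# 1. Receive feedback data indicating performance of the model.
feedback = [FeedbackRecord(0.9, True), FeedbackRecord(0.8, True),
            FeedbackRecord(0.3, False), FeedbackRecord(0.2, False)]
# 2. Generate a training dataset from a transformed version of the feedback.
dataset = [(f.score, f.was_relevant) for f in feedback]
# 3. Train the model with the dataset, yielding a trained model.
model = ThresholdModel()
model.train(dataset)
# 4. Generate new output utilizing the trained model.
print(model.predict(0.7))  # prints True (threshold is now 0.55)
```

Any model exposing train and predict steps could replace ThresholdModel; the sketch shows only the claimed ordering of receive, transform, train, and generate.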
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the feature of receiving feedback data, transforming it into a training dataset, and using that dataset to train the machine learning model (as disclosed by McCuiston) into the system further including automated machine learning and artificial intelligence methods, wherein the automated machine learning and artificial intelligence methods are trained to enhance and improve the management system (as disclosed by Burns). One of ordinary skill in the art would have been motivated to incorporate this feature because it would generate one or more complexity levels, risk data, or recommendations associated with the potential product (see McCuiston: Col. 1 Line 35-42).
Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to incorporate the feature of receiving feedback data, transforming it into a training dataset, and using that dataset to train the machine learning model (as disclosed by McCuiston) into the system further including automated machine learning and artificial intelligence methods, wherein the automated machine learning and artificial intelligence methods are trained to enhance and improve the management system (as disclosed by Burns), because the claimed invention is merely a simple arrangement of old elements, with each performing the same function it had been known to perform, yielding no more than one would expect from such an arrangement. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art, and one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by adding the well-known feature of receiving, transforming, and training on feedback data into Burns's system of automated machine learning and artificial intelligence methods trained to enhance and improve the management system). See also MPEP § 2143(I)(A).
Referring to Claim 19, the combination of Burns and McCuiston teaches the method of claim 18, including wherein at least one of the first metric or the second metric includes a coverage metric, an opportunity metric, and an exposure metric (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns: Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Cols. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2 and the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns: Col. 54, Col. 55 Line 15-67, Cols. 63-64: discussing and identifying disruptive innovation determination and development).
Referring to Claim 20, the combination of Burns and McCuiston teaches the method of claim 18, wherein at least one of the first metric or the second metric includes a comprehensive metric based at least in part on a coverage metric, an opportunity metric, and an exposure metric (see at least Burns: Col. 41 Line 33-67 and Col. 42 Line 5-35; see at least Burns: Cols. 52-54: discussing the plethora of information collected, analyzed, quantified, and presented to users including market information; see also Burns: Cols. 67-68: discussing the system quantifying a plurality of technologies and assets into a score and aggregating a plurality of scores to identify the potential success; see also Burns: Col. 50 Line 17-28; see also Burns: Col. 27 Line 20-55; see also Burns: Col. 56 Line 27-59 and Col. 78 Line 1-3; see also Burns: Cols. 61-62: discussing scoring ideas, patents, products, and services against known patents and publications; see also Burns: Col. 48 Line 1-40, Col. 49 Line 10-58, Col. 51 Line 25-55, Col. 52 Line 29-52: discussing phases 0 through 2 and the formulation, adjustment, and tailoring of solutions to technical challenges; see also Burns: Col. 54, Col. 55 Line 15-67, Cols. 63-64: discussing and identifying disruptive innovation determination and development).
Response to Arguments
Applicant's arguments filed with respect to the rejection of the claims under 35 U.S.C. § 101 have been fully considered but they are not persuasive. The rejection has been updated to address the amendments submitted.
Claims recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include: a claim to "collecting information, analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016); and a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011).
Examiner notes that the claimed invention is directed to concepts that are performed mentally and are a product of human mental work. Because the limitations above closely follow the steps of processing market information associated with quantified technology, and the steps involve human judgments, observations, and evaluations that can be practically or reasonably performed in the human mind, the claim recites an abstract idea consistent with the “mental process” grouping set forth in MPEP 2106.04(a)(2)(III).
Examiner notes that the claimed invention amounts to a mental process in that the system is collecting command information, analyzing or processing the information to determine metrics related to intellectual property, and subsequently displaying the results based on the determination, which is similar to the abstract ideas identified in Electric Power Group and Classen.
Alternatively, the sub-grouping "managing personal behavior or relationships or interactions between people" includes social activities, teaching, and following rules or instructions. An example of a claim reciting managing personal behavior is Intellectual Ventures I LLC v. Capital One Bank (USA), 792 F.3d 1363, 115 USPQ2d 1636 (Fed. Cir. 2015). The patentee in this case claimed methods comprising storing user-selected pre-set limits on spending in a database, and when one of the limits is reached, communicating a notification to the user via a device. 792 F.3d. at 1367, 115 USPQ2d at 1639-40. The Federal Circuit determined that the claims were directed to the abstract idea of "tracking financial transactions to determine whether they exceed a pre-set spending limit (i.e., budgeting)", which "is not meaningfully different from the ideas found to be abstract in other cases before the Supreme Court and our court involving methods of organizing human activity." 792 F.3d. at 1367-68, 115 USPQ2d at 1640. Another example of managing personal behavior recited in a claim is considering historical usage information while inputting data, BSG Tech. LLC v. Buyseasons, Inc., 899 F.3d 1281, 1286, 127 USPQ2d 1688, 1691 (Fed. Cir. 2018).
Examiner notes that claims 1-20 recite processing and displaying information related to intellectual property assets, which is similar to the abstract ideas identified in MPEP 2106.04(a)(2)(II) in grouping “II” in that the claims recite certain methods of organizing human activity, such as fundamental economic practices, legal/business interactions, and risk management (i.e., technology/asset analyzing, valuing, processing, and protecting). This is merely a further embellishment of the abstract idea and does not further limit the claimed invention so as to render the claims patentable subject matter. The limitations, substantially comprising the body of the claim, recite processes found in standard practice in intellectual property management; such processes are common practice in the creation, production, prosecution, litigation, and management of IP assets and technology. Because the limitations above closely follow steps standard in interactions between people and businesses (such as technology/asset analyzing, valuing, processing, and protecting), and the steps of the claims involve organizing human activity, the claim recites an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II).
Examiner notes that the claimed invention is more similar to the abstract ideas in BSG and Intellectual Ventures in that the system is processing historical intellectual property information and tracked information related to inventions to output an analysis specific to the technology.
Furthermore, the mathematical concepts grouping is defined as mathematical relationships, mathematical formulas or equations, and mathematical calculations. The Supreme Court has identified a number of concepts falling within this grouping as abstract ideas, including: a procedure for converting binary-coded decimal numerals into pure binary form, Gottschalk v. Benson, 409 U.S. 63, 65, 175 USPQ 673, 674 (1972); a mathematical formula for calculating an alarm limit, Parker v. Flook, 437 U.S. 584, 588-89, 198 USPQ 193, 195 (1978); the Arrhenius equation, Diamond v. Diehr, 450 U.S. 175, 191, 209 USPQ 1, 15 (1981); and a mathematical formula for hedging, Bilski v. Kappos, 561 U.S. 593, 611, 95 USPQ2d 1001, 1004 (2010).
When determining whether a claim recites a mathematical concept (i.e., mathematical relationships, mathematical formulas or equations, and mathematical calculations), examiners should consider whether the claim recites a mathematical concept or merely limitations that are based on or involve a mathematical concept. A claim does not recite a mathematical concept (i.e., the claim limitations do not fall within the mathematical concept grouping), if it is only based on or involves a mathematical concept. See, e.g., Thales Visionix, Inc. v. United States, 850 F.3d 1343, 1348-49, 121 USPQ2d 1898, 1902-03 (Fed. Cir. 2017) (determining that the claims to a particular configuration of inertial sensors and a particular method of using the raw data from the sensors in order to more accurately calculate the position and orientation of an object on a moving platform did not merely recite "the abstract idea of using ‘mathematical equations for determining the relative position of a moving object to a moving reference frame’."). For example, a limitation that is merely based on or involves a mathematical concept described in the specification may not be sufficient to fall into this grouping, provided the mathematical concept itself is not recited in the claim.
Furthermore, the claims recite the familiar concept of property valuation. As the Supreme Court explained in Alice, claims involving “a fundamental economic practice long prevalent in our system of commerce,” such as the concepts of hedging and intermediated settlement, are patent-ineligible abstract ideas. Alice, 134 S. Ct. at 2356 (quoting Bilski v. Kappos, 561 U.S. 593, 611 (2010)). It follows that the claims at issue here are directed to an abstract idea. Applicants’ claims recite one or more computers configured to receive a user’s property valuations and display that information. Like the risk hedging in Bilski and the concept of intermediated settlement in Alice, the concept of property valuation, that is, determining a property’s market value, is “a fundamental economic practice long prevalent in our system of commerce.” Id. (quoting Bilski, 561 U.S. at 611). Prospective sellers and buyers have long valued property, and doing so is necessary to the functioning of the market for such property. As such, claim 1 is directed to the abstract idea of intellectual property valuation and is similar to the abstract idea identified in MPEP 2106.04(a)(2)(II) in grouping “II” in that the claims recite certain methods of organizing human activity such as fundamental economic practices. This is merely a further embellishment of the abstract idea and does not further limit the claimed invention so as to render the claims patentable subject matter. The limitations, substantially comprising the body of the claim, recite processes found in standard practice in intellectual property creation, valuation, protection, and management. This is common practice when creating and protecting any intellectual property.
Because the limitations above closely follow steps standard in fundamental economic practices such as property valuation, and the steps of the claims involve organizing human activity, the claim recites an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II).
Examples of mathematical calculations recited in a claim include: performing a resampled statistical analysis to generate a resampled distribution, SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163-65, 127 USPQ2d 1597, 1598-1600 (Fed. Cir. 2018), modifying SAP America, Inc. v. InvestPic, LLC, 890 F.3d 1016, 126 USPQ2d 1638 (Fed. Cir. 2018); using an algorithm for determining the optimal number of visits by a business representative to a client, In re Maucorps, 609 F.2d 481, 482, 203 USPQ 812, 813 (CCPA 1979); and calculating the difference between local and average data values, In re Abele, 684 F.2d 902, 903, 214 USPQ 682, 683-84 (CCPA 1982).
Examiner notes that the claimed invention is similar to the identified use of mathematical calculations discussed in Bilski, SAP America, Inc., In re Maucorps, In re Abele, and Claim 2 of Example 47.
Examiner initially draws attention to Recentive Analytics v. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023), in which the Court decided, in a case with a more specific use of machine learning than the instant application, that claims involving a trained model updating upon new information amount to merely applying the known computer elements as a tool to implement the method. The claimed invention is similar to the claims found within Recentive in that the model is being trained to output information, and the instant application is similar to the identified use of “training” a model. No improvement to the overall method of machine learning or algorithm is presented. The mere use of machine learning as a tool to update a document does not render the claims patentable subject matter. “Considering the focus of the disputed claims, Alice, 573 U.S. at 217, it is clear that they are directed to ineligible, abstract subject matter. Recentive has repeatedly conceded that it is not claiming machine learning itself. See Appellant’s Br. 45; Transcript at 26:14–15. Both sets of patents rely on the use of generic machine learning technology in carrying out the claimed methods for generating event schedules and network maps. See, e.g., ’367 patent, col. 6 ll. 1–5, col. 11–12; ’811 patent, col. 3, l. 23, col. 5 l. 4. The machine learning technology described in the patents is conventional, as the patents’ specifications demonstrate. See, e.g., ’367 patent, col. 6 ll. 1–5 (requiring “any suitable machine learning technology . . . such as, for example: a gradient boosted random forest, a regression, a neural network, a decision tree, a support vector machine, a Bayesian network, [or] other type of technique”); ’811 patent, col. 3 l. 23 (requiring the application of “any suitable machine learning technique.”).” (Recentive Analytics v. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023)).
“Instead of disclosing “a specific implementation of a solution to a problem in the software arts,” Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339 (Fed. Cir. 2016), or “a specific means or method that solves a problem in an existing technological process,” Koninklijke, 942 F.3d at 1150, the only thing the claims disclose about the use of machine learning is that machine learning is used in a new environment.” (Recentive Analytics v. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023)).
“the claimed methods are not rendered patent eligible by the fact that (using existing machine learning technology) they perform a task previously undertaken by humans with greater speed and efficiency than could previously be achieved. We have consistently held, in the context of computer-assisted methods, that such claims are not made patent eligible under § 101 simply because they speed up human activity. See, e.g., Content Extraction, 776 F.3d at 1347; DealerTrack, 674 F.3d at 1333. Whether the issue is raised at step one or step two, the increased speed and efficiency resulting from use of computers (with no improved computer techniques) do not themselves create eligibility. See, e.g., Trinity Info Media, LLC v. Covalent, Inc., 72 F.4th 1355, 1363 (Fed. Cir. 2023) (rejecting argument that “humans could not mentally engage in the ‘same claimed process’ because they could not perform ‘nanosecond comparisons’ and aggregate ‘result values with huge numbers of polls and members’”) (internal citation omitted); Customedia Techs., LLC v. Dish Network Corp., 951 F.3d 1359, 1365 (Fed. Cir. 2020) (holding claims abstract where “[t]he only improvements identified in the specification are generic speed and efficiency improvements inherent in applying the use of a computer to any task”); compare McRo, 837 F.3d at 1314-16 (finding eligibility of claims to use specific computer techniques different from those humans use on their own to produce natural-seeming lip motion for speech)” (Recentive Analytics v. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023)).
Similar to Recentive, there is "nothing in the claims, whether considered individually or in their ordered combination, that would transform the Machine Learning Training and Network Map patents into something “significantly more” than the abstract idea of generating event schedules and network maps through the application of machine learning. See SAP Am., 898 F.3d at 1169–70; Broadband iTV, 113 F.4th at 1372. Recentive has also failed to identify any allegation in its complaint that would suffice to plausibly allege an inventive concept to defeat Fox’s motion to dismiss. Trinity, 72 F.4th at 1365" (Recentive Analytics v. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023)). Examiner notes that the claims are similar to the identified unpatentable subject matter in that the system is merely applying standard machine learning elements in processing and managing IP asset information.
Therefore, the claims are directed to an abstract idea in the form of a judicial exception.
Claims can recite a mental process even if they are claimed as being performed on a computer. The Supreme Court recognized this in Benson, determining that a mathematical algorithm for converting binary coded decimal to pure binary within a computer’s shift register was an abstract idea. The Court concluded that the algorithm could be performed purely mentally even though the claimed procedures "can be carried out in existing computers long in use, no new machinery being necessary." 409 U.S. at 67, 175 USPQ at 675. See also Mortgage Grader, 811 F.3d at 1324, 117 USPQ2d at 1699 (concluding that concept of "anonymous loan shopping" recited in a computer system claim is an abstract idea because it could be "performed by humans without a computer").
In evaluating whether a claim that requires a computer recites a mental process, examiners should carefully consider the broadest reasonable interpretation of the claim in light of the specification. For instance, examiners should review the specification to determine if the claimed invention is described as a concept that is performed in the human mind and applicant is merely claiming that concept performed 1) on a generic computer, or 2) in a computer environment, or 3) is merely using a computer as a tool to perform the concept. In these situations, the claim is considered to recite a mental process.
An example of a case identifying a mental process performed on a generic computer as an abstract idea is Voter Verified, Inc. v. Election Systems & Software, LLC, 887 F.3d 1376, 1385, 126 USPQ2d 1498, 1504 (Fed. Cir. 2018). In this case, the Federal Circuit relied upon the specification in explaining that the claimed steps of voting, verifying the vote, and submitting the vote for tabulation are "human cognitive actions" that humans have performed for hundreds of years. The claims therefore recited an abstract idea, despite the fact that the claimed voting steps were performed on a computer. 887 F.3d at 1385, 126 USPQ2d at 1504. Another example is FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 120 USPQ2d 1293 (Fed. Cir. 2016). The patentee in FairWarning claimed a system and method of detecting fraud and/or misuse in a computer environment, in which information regarding accesses of a patient’s personal health information was analyzed according to one of several rules (i.e., related to accesses in excess of a specific volume, accesses during a pre-determined time interval, or accesses by a specific user) to determine if the activity indicates improper access. 839 F.3d. at 1092, 120 USPQ2d at 1294. The court determined that these claims were directed to a mental process of detecting misuse, and that the claimed rules here were "the same questions (though perhaps phrased with different words) that humans in analogous situations detecting fraud have asked for decades, if not centuries." 839 F.3d. at 1094-95, 120 USPQ2d at 1296.
An example of a case in which a computer was used as a tool to perform a mental process is Mortgage Grader, 811 F.3d. at 1324, 117 USPQ2d at 1699. The patentee in Mortgage Grader claimed a computer-implemented system for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders, and a computer system providing an interface and a grading module. The interface prompts a borrower to enter personal information, which the grading module uses to calculate the borrower’s credit grading, and allows the borrower to identify and compare loan packages in the database using the credit grading. 811 F.3d. at 1318, 117 USPQ2d at 1695. The Federal Circuit determined that these claims were directed to the concept of "anonymous loan shopping", which was a concept that could be "performed by humans without a computer." 811 F.3d. at 1324, 117 USPQ2d at 1699. Another example is Berkheimer v. HP, Inc., 881 F.3d 1360, 125 USPQ2d 1649 (Fed. Cir. 2018), in which the patentee claimed methods for parsing and evaluating data using a computer processing system. The Federal Circuit determined that these claims were directed to mental processes of parsing and comparing data, because the steps were recited at a high level of generality and merely used computers as a tool to perform the processes. 881 F.3d at 1366, 125 USPQ2d at 1652-53.
Both product claims (e.g., computer system, computer-readable medium, etc.) and process claims may recite mental processes. For example, in Mortgage Grader, the patentee claimed a computer-implemented system and a method for enabling borrowers to anonymously shop for loan packages offered by a plurality of lenders, comprising a database that stores loan package data from the lenders, and a computer system providing an interface and a grading module. The Federal Circuit determined that both the computer-implemented system and method claims were directed to "anonymous loan shopping", which was an abstract idea because it could be "performed by humans without a computer." 811 F.3d. at 1318, 1324-25, 117 USPQ2d at 1695, 1699-1700. See also FairWarning IP, 839 F.3d at 1092, 120 USPQ2d at 1294 (identifying both system and process claims for detecting improper access of a patient's protected health information in a health-care system computer environment as directed to abstract idea of detecting fraud); Content Extraction & Transmission LLC v. Wells Fargo Bank, N.A., 776 F.3d 1343, 1345, 113 USPQ2d 1354, 1356 (Fed. Cir. 2014) (system and method claims of inputting information from a hard copy document into a computer program). Accordingly, the phrase "mental processes" should be understood as referring to the type of abstract idea, and not to the statutory category of the claim.
Examples of product claims reciting mental processes include: an application program interface for extracting and processing information from a diversity of types of hard copy documents, Content Extraction, 776 F.3d at 1345, 113 USPQ2d at 1356; and a computer readable medium containing program instructions for detecting fraud, CyberSource, 654 F.3d at 1368 n. 1, 99 USPQ2d at 1692 n.1.
Examiner notes that the claimed invention is similar to Voter Verified, Inc., FairWarning, Mortgage Grader, Berkheimer, Content Extraction, and CyberSource, wherein the computer system, “processors,” or the recited steps of “generating a machine learning model configured to generate clusters of IP assets; receiving feedback data indicating performance of the machine learning model; generating a training dataset utilizing a transformed version of the feedback data; training the machine learning model utilizing the training dataset such that a trained machine learning model is generated; generating, based at least in part on the second IP assets associated with the selected entities and utilizing the trained machine learning model” merely serve as the generic computer, computing environment, or tool to perform the mental process or abstract idea.
The second part of the Alice/Mayo test is often referred to as a search for an inventive concept. Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 217, 110 USPQ2d 1976, 1981 (2014) (citing Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 71-72, 101 USPQ2d 1961, 1966 (2012)).
Evaluating additional elements to determine whether they amount to an inventive concept requires considering them both individually and in combination to ensure that they amount to significantly more than the judicial exception itself. Because this approach considers all claim elements, the Supreme Court has noted that "it is consistent with the general rule that patent claims ‘must be considered as a whole.’" Alice Corp., 573 U.S. at 218 n.3, 110 USPQ2d at 1981 (quoting Diamond v. Diehr, 450 U.S. 175, 188, 209 USPQ 1, 8-9 (1981)). Consideration of the elements in combination is particularly important, because even if an additional element does not amount to significantly more on its own, it can still amount to significantly more when considered in combination with the other elements of the claim. See, e.g., Rapid Litig. Mgmt. v. CellzDirect, 827 F.3d 1042, 1051, 119 USPQ2d 1370, 1375 (Fed. Cir. 2016) (process reciting combination of individually well-known freezing and thawing steps was "far from routine and conventional" and thus eligible); BASCOM Global Internet Servs. v. AT&T Mobility LLC, 827 F.3d 1341, 1350, 119 USPQ2d 1236, 1242 (Fed. Cir. 2016) (inventive concept may be found in the non-conventional and non-generic arrangement of components that are individually well-known and conventional).
Limitations that the courts have found not to be enough to qualify as "significantly more" when recited in a claim with a judicial exception include: ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the industry, as discussed in Alice Corp., 573 U.S. at 225, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)); and iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584, 588-90, 198 USPQ 193, 197-98 (1978) (MPEP § 2106.05(h)).
It is important to note that in order for a method claim to improve computer functionality, the broadest reasonable interpretation of the claim must be limited to computer implementation. That is, a claim whose entire scope can be performed mentally cannot be said to improve computer technology. Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 120 USPQ2d 1473 (Fed. Cir. 2016) (a method of translating a logic circuit into a hardware component description of a logic circuit was found to be ineligible because the method did not employ a computer and a skilled artisan could perform all the steps mentally). Similarly, a claimed process covering embodiments that can be performed on a computer, as well as embodiments that can be practiced verbally or with a telephone, cannot improve computer technology. See RecogniCorp, LLC v. Nintendo Co., 855 F.3d 1322, 1328, 122 USPQ2d 1377, 1381 (Fed. Cir. 2017) (process for encoding/decoding facial data using image codes assigned to particular facial features held ineligible because the process did not require a computer).
Examples that the courts have indicated may not be sufficient to show an improvement in computer functionality include: ii. Accelerating a process of analyzing audit log data when the increased speed comes solely from the capabilities of a general-purpose computer, FairWarning IP, LLC v. Iatric Sys., 839 F.3d 1089, 1095, 120 USPQ2d 1293, 1296 (Fed. Cir. 2016); iii. Mere automation of manual processes, such as using a generic computer to process an application for financing a purchase, Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 1055, 123 USPQ2d 1100, 1108-09 (Fed. Cir. 2017), or speeding up a loan-application process by enabling borrowers to avoid physically going to or calling each lender and filling out a loan application, LendingTree, LLC v. Zillow, Inc., 656 Fed. App'x 991, 996-97 (Fed. Cir. 2016) (non-precedential); and vii. Providing historical usage information to users while they are inputting data, in order to improve the quality and organization of information added to a database, because "an improvement to the information stored by a database is not equivalent to an improvement in the database’s functionality," BSG Tech LLC v. Buyseasons, Inc., 899 F.3d 1281, 1287-88, 127 USPQ2d 1688, 1693-94 (Fed. Cir. 2018).
To show that the involvement of a computer assists in improving the technology, the claims must recite the details regarding how a computer aids the method, the extent to which the computer aids the method, or the significance of a computer to the performance of the method. Merely adding generic computer components to perform the method is not sufficient. Thus, the claim must include more than mere instructions to perform the method on a generic component or machinery to qualify as an improvement to an existing technology. See MPEP § 2106.05(f) for more information about mere instructions to apply an exception.
Examples that the courts have indicated may not be sufficient to show an improvement to technology include: i. A commonplace business method being applied on a general purpose computer, Alice Corp., 573 U.S. at 223, 110 USPQ2d at 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
The instant application fails to integrate the judicial exception into a practical application because it merely recites the words “apply it” (or an equivalent) with the judicial exception or merely includes instructions to implement an abstract idea. The instant application is directed to a method instructing the reader to implement the identified method of organizing human activity of fundamental economic activities, legal/business interactions, and risk management (i.e., technology/asset analyzing, valuing, processing, and protecting) on generically claimed computer structure. For instance, the additional elements or combination of elements other than the abstract idea itself include elements such as “interface”, “processor”, and “memory” recited at a high level of generality. These elements do not themselves amount to an improvement to the interface or computer, to a technology, or to another technical field. This is consistent with Applicant’s disclosure, which states: “The user interface(s) 114 may be configured to display information associated with the IP landscaping platform and to receive user input associated with the IP landscaping platform. [00100] The remote computing resources 104 may include one or more components such as, for example, one or more processors 116, one or more network interfaces 118, and/or computer-readable media 120. The computer-readable media 120 may include one or more components, such as, for example, a landscaping component 122, an scoring component 124, and/or one or more data store(s) 126.” (App. Spec. 99).
Accordingly, the claimed “system”, read in light of the specification, employs any of a wide range of possible devices comprising a number of components that are “well-known”, including an indiscriminate “interface”, “processor”, “a machine learning model”, “generating a machine learning model configured to generate metrics associated with IP assets”; “receiving historical data associated with the IP assets from multiple different databases”; “transforming the historical data into a training dataset configured to be utilized by the machine learning model for training purposes”; “training the machine learning model utilizing the training dataset such that a specifically trained machine learning model is generated”; and “memory” (e.g., processing device, modules). Thus, the claimed structure amounts to appending generic computer elements to the abstract idea comprising the body of the claim. The computing elements are involved only at a general, high level and have no particular role within any of the recited functions beyond rendering the claim a computer-implemented method using a generically claimed “interface”, “processor”, and “memory”; even basic, generic recitations that imply use of a computer, such as storing information via servers, would add little if anything to the abstract idea.
Examiner notes that the claimed invention is more similar to the implementations of computer elements found in FairWarning IP, LLC, Credit Acceptance Corp. v. Westlake Services, LendingTree, LLC v. Zillow, Inc., BSG Tech LLC v. Buyseasons, Inc., Alice Corp., and Versata Dev. Group, Inc., and fails to implement a technical improvement, a practical application, or significantly more than the abstract idea.
The claims stand rejected.
102 Rejections
Applicant's arguments filed with respect to the claims under 35 U.S.C. § 102 have been fully considered, and the rejection has been updated based on further search and consideration necessitated by the submitted amendments.
The claims stand rejected.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL C YOUNG whose telephone number is (571)272-1882. The examiner can normally be reached M-F: 7:00 a.m. - 3:00 p.m. EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nate Uber can be reached at (571)270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Michael Young/Examiner, Art Unit 3626