DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
This communication is a Final Office Action in response to the Track One Request filed on the 1st day of October, 2025. Currently, claims 1-20 are pending. No claims are allowed.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without integration into a practical application and without significantly more.
Under MPEP 2106, when considering subject matter eligibility under 35 U.S.C. § 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (step 1). If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea) (step 2A prong 1), and if so, it must additionally be determined whether the claim is integrated into a practical application (step 2A prong 2). If an abstract idea is present in the claim without integration into a practical application, any element or combination of elements in the claim must be sufficient to ensure that the claim amounts to significantly more than the abstract idea itself (step 2B).
In the instant case, claims 1-20 are directed to a system and non-transitory computer-readable media. Thus, each of the claims falls within one of the four statutory categories (step 1). However, the claims also fall within the judicial exception of an abstract idea (step 2A). While claims 1 and 11 are directed to different statutory categories, their language and scope are substantially the same, and they have been addressed together below.
Under Step 2A Prong 1, the test is to identify whether the claims are “directed to” a judicial exception. Examiner notes that the claimed invention is directed to an abstract idea in that the instant application is directed to mathematical calculations (see MPEP 2106.04(a)(2)(I)), certain methods of organizing human activity, specifically commercial interactions and behaviors and managing personal behavior and/or interactions between people (see MPEP 2106.04(a)(2)(II)), and mental processes (see MPEP 2106.04(a)(2)(III)).
Examiner notes that the system is directed to a mental process. The courts consider a mental process (thinking) that "can be performed in the human mind, or by a human using a pen and paper" to be an abstract idea. CyberSource Corp. v. Retail Decisions, Inc., 654 F.3d 1366, 1372, 99 USPQ2d 1690, 1695 (Fed. Cir. 2011). As the Federal Circuit explained, "methods which can be performed mentally, or which are the equivalent of human mental work, are unpatentable abstract ideas, the ‘basic tools of scientific and technological work’ that are open to all." 654 F.3d at 1371, 99 USPQ2d at 1694 (citing Gottschalk v. Benson, 409 U.S. 63, 175 USPQ 673 (1972)). See also Mayo Collaborative Servs. v. Prometheus Labs. Inc., 566 U.S. 66, 71, 101 USPQ2d 1961, 1965 (2012) ("‘[M]ental processes[] and abstract intellectual concepts are not patentable, as they are the basic tools of scientific and technological work’" (quoting Benson, 409 U.S. at 67, 175 USPQ at 675)); Parker v. Flook, 437 U.S. 584, 589, 198 USPQ 193, 197 (1978) (same).
Claims recite a mental process when they contain limitations that can practically be performed in the human mind, including, for example, observations, evaluations, judgments, and opinions. Examples of claims that recite mental processes include: a claim to "collecting information, analyzing it, and displaying certain results of the collection and analysis," where the data analysis steps are recited at a high level of generality such that they could practically be performed in the human mind, Electric Power Group v. Alstom, S.A., 830 F.3d 1350, 1353-54, 119 USPQ2d 1739, 1741-42 (Fed. Cir. 2016); and a claim to collecting and comparing known information (claim 1), which are steps that can be practically performed in the human mind, Classen Immunotherapies, Inc. v. Biogen IDEC, 659 F.3d 1057, 1067, 100 USPQ2d 1492, 1500 (Fed. Cir. 2011).
Examiner notes that claims 1-20 recite a system for accessing a request for lineage tracking of assets, the computer-implemented method comprising: accessing a request for lineage tracking, the request comprising information relating to a first asset; determining one or more first metadata items of the first asset; extracting a portion of the first asset using a first machine learning model, wherein the portion of the first asset comprises at least one of: a foreground, a background, or a subject; determining a first vector embedding of the portion of the first asset; comparing the one or more first metadata items of the first asset to a plurality of second metadata items associated with a plurality of second assets; comparing the first vector embedding to a plurality of second vector embeddings associated with the plurality of second assets; [[and]] identifying, based on (1) comparing the one or more first metadata items of the first asset to the plurality of second metadata items associated with the plurality of second assets and (2) comparing the first vector embedding to the plurality of second vector embeddings associated with the plurality of second assets, one or more potentially related assets; and generating an output indicating the one or more potentially related assets, wherein the one or more potentially related assets are selected from the plurality of second assets, wherein the one or more potentially related assets are identified by at least one of: determining that the one or more first metadata items have more than a first threshold similarity with second metadata items of the plurality of second metadata items, or determining that the first vector embedding has more than a second threshold similarity with second vector embeddings of the plurality of second vector embeddings, which is directed to concepts that can be performed mentally and are a product of human mental work.
Because the limitations above closely follow the steps of receiving asset information, processing the information using embedding vectors and features, and generating an output based on the processing and comparison, and the steps involve human judgments, observations, and evaluations that can be practically or reasonably performed in the human mind, the claim recites an abstract idea consistent with the “mental process” grouping set forth in MPEP 2106.04(a)(2)(III).
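For illustration only, the threshold-based comparison recited in the claims can be sketched generically. The function names, similarity measures, and threshold values below are hypothetical and are not drawn from the application's disclosure; the sketch merely shows the kind of metadata-overlap and embedding-similarity comparison the claim language describes.

```python
# Illustrative sketch only: a generic metadata-and-embedding comparison of the
# kind recited in the claims. All names and thresholds are hypothetical.
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vector embeddings.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def metadata_overlap(first_meta, second_meta):
    # Fraction of the first asset's metadata items also present in a second asset.
    shared = set(first_meta) & set(second_meta)
    return len(shared) / max(len(set(first_meta)), 1)

def find_potentially_related(first_meta, first_embedding, second_assets,
                             meta_threshold=0.5, embed_threshold=0.9):
    # An asset is flagged if EITHER comparison exceeds its threshold,
    # mirroring the claim's "at least one of" phrasing.
    related = []
    for asset_id, meta, embedding in second_assets:
        if (metadata_overlap(first_meta, meta) > meta_threshold or
                cosine_similarity(first_embedding, embedding) > embed_threshold):
            related.append(asset_id)
    return related

second_assets = [
    ("asset-A", ["camera:X", "date:2024"], [1.0, 0.0]),
    ("asset-B", ["camera:Y"], [0.0, 1.0]),
]
print(find_potentially_related(["camera:X", "date:2024"], [1.0, 0.1], second_assets))
# → ['asset-A']
```

Each step in the sketch (collecting metadata, computing an overlap, comparing against a threshold) is the kind of observation and comparison that, as recited at this level of generality, could be performed mentally or with pen and paper.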
Examiner notes that the claimed invention amounts to a mental process in that the system is receiving information related to assets, processing the received information, and generating an output based on that processing, which is similar to the abstract ideas identified in Electric Power Group and Classen.
Alternatively, the phrase "methods of organizing human activity" is used to describe concepts relating to: fundamental economic principles or practices (including hedging, insurance, and mitigating risk); commercial or legal interactions (including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
The Supreme Court has identified a number of concepts falling within the "certain methods of organizing human activity" grouping as abstract ideas. In particular, in Alice, the Court concluded that the use of a third party to mediate settlement risk is a ‘‘fundamental economic practice’’ and thus an abstract idea. 573 U.S. at 219–20, 110 USPQ2d at 1982. In addition, the Court in Alice described the concept of risk hedging identified as an abstract idea in Bilski as ‘‘a method of organizing human activity’’. Id. Previously, in Bilski, the Court concluded that hedging is a ‘‘fundamental economic practice’’ and therefore an abstract idea. 561 U.S. at 611–612, 95 USPQ2d at 1010.
The courts have used the phrases "fundamental economic practices" or "fundamental economic principles" to describe concepts relating to the economy and commerce. Fundamental economic principles or practices include hedging, insurance, and mitigating risks.
The term "fundamental" is not used in the sense of necessarily being "old" or "well-known." See, e.g., OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1364, 115 U.S.P.Q.2d 1090, 1092 (Fed. Cir. 2015) (a new method of price optimization was found to be a fundamental economic concept); In re Smith, 815 F.3d 816, 818-19, 118 USPQ2d 1245, 1247 (Fed. Cir. 2016) (describing a new set of rules for conducting a wagering game as a "fundamental economic practice"); In re Greenstein, 774 Fed. Appx. 661, 664, 2019 USPQ2d 212400 (Fed. Cir. 2019) (non-precedential) (claims to a new method of allocating returns to different investors in an investment fund were a fundamental economic concept). However, being old or well-known may indicate that the practice is fundamental. See, e.g., Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 219-20, 110 USPQ2d 1976, 1981-82 (2014) (describing the concept of intermediated settlement, like the risk hedging in Bilski, to be a "‘fundamental economic practice long prevalent in our system of commerce’" and also as "a building block of the modern economy") (citation omitted); Bilski v. Kappos, 561 U.S. 593, 611, 95 USPQ2d 1001, 1010 (2010) (claims to the concept of hedging are a "fundamental economic practice long prevalent in our system of commerce and taught in any introductory finance class.") (citation omitted); Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1313, 120 USPQ2d 1353, 1356 (Fed. Cir. 2016) ("The category of abstract ideas embraces ‘fundamental economic practice[s] long prevalent in our system of commerce,’ … including ‘longstanding commercial practice[s]’").
Other examples of "fundamental economic principles or practices" include: (i) mitigating settlement risk, Alice Corp. v. CLS Bank, 573 U.S. 208, 218, 110 USPQ2d 1976, 1982 (2014); and (ii) financial instruments that are designed to protect against the risk of investing in financial instruments, In re Chorna, 656 Fed. App'x 1016, 1021 (Fed. Cir. 2016) (non-precedential).
Examiner notes that claims 1-20 recite a system for accessing a request for lineage tracking of assets, the computer-implemented method comprising: accessing a request for lineage tracking, the request comprising information relating to a first asset; determining one or more first metadata items of the first asset; extracting a portion of the first asset using a first machine learning model, wherein the portion of the first asset comprises at least one of: a foreground, a background, or a subject; determining a first vector embedding of the portion of the first asset; comparing the one or more first metadata items of the first asset to a plurality of second metadata items associated with a plurality of second assets; comparing the first vector embedding to a plurality of second vector embeddings associated with the plurality of second assets; [[and]] identifying, based on (1) comparing the one or more first metadata items of the first asset to the plurality of second metadata items associated with the plurality of second assets and (2) comparing the first vector embedding to the plurality of second vector embeddings associated with the plurality of second assets, one or more potentially related assets; and generating an output indicating the one or more potentially related assets, wherein the one or more potentially related assets are selected from the plurality of second assets, wherein the one or more potentially related assets are identified by at least one of: determining that the one or more first metadata items have more than a first threshold similarity with second metadata items of the plurality of second metadata items, or determining that the first vector embedding has more than a second threshold similarity with second vector embeddings of the plurality of second vector embeddings, and is similar to the abstract idea identified in MPEP 2106.04(a)(2)(II) in grouping “II” in that the claims recite certain methods of organizing human activity such as legal interactions. These limitations are merely further embellishments of the abstract idea and do not further limit the claimed invention so as to render the claims patent-eligible subject matter. The limitations, substantially comprising the body of the claim, recite processes standard in legal practice and business interactions. Managing and processing asset information in order to store and track it and to find similarities between one or more assets has been done since long before the invention of computers and is standard in the fields of investments, intellectual property, real estate, etc. Common practice in most property analysis and management is to identify information related to assets, identify similarities, and make decisions based on the comparison. Because the limitations above closely follow steps standard in legal interactions, and the steps of the claims involve organizing human activity, the claim recites an abstract idea consistent with the “organizing human activity” grouping set forth in MPEP 2106.04(a)(2)(II).
Examiner notes that the claimed invention is more similar to the identified abstract ideas within Alice and In re Chorna.
Furthermore, "Commercial interactions" or "legal interactions" include agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations.
An example of a claim reciting a commercial or legal interaction, where the interaction is an agreement in the form of contracts, is found in buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 112 USPQ2d 1093 (Fed. Cir. 2014). The agreement at issue in buySAFE was a transaction performance guaranty, which is a contractual relationship. 765 F.3d at 1355, 112 USPQ2d at 1096. The patentee claimed a method in which a computer operated by the provider of a safe transaction service receives a request for a performance guarantee for an online commercial transaction, the computer processes the request by underwriting the requesting party in order to provide the transaction guarantee service, and the computer offers, via a computer network, a transaction guaranty that binds to the transaction upon the closing of the transaction. 765 F.3d at 1351-52, 112 USPQ2d at 1094. The Federal Circuit described the claims as directed to an abstract idea because they were "squarely about creating a contractual relationship--a ‘transaction performance guaranty’." 765 F.3d at 1355, 112 USPQ2d at 1096.
Examiner notes that the claimed invention is similar to the abstract idea found in buySAFE v. Google, Inc., in that the claimed invention is directed to tracing assets and their associated metadata to identify similar assets.
An example of a claim reciting a commercial or legal interaction in the form of a legal obligation is found in Fort Properties, Inc. v. American Master Lease, LLC, 671 F.3d 1317, 101 USPQ2d 1785 (Fed. Cir. 2012). The patentee claimed a method of "aggregating real property into a real estate portfolio, dividing the interests in the portfolio into a number of deedshares, and subjecting those shares to a master agreement." 671 F.3d at 1322, 101 USPQ2d at 1788. The legal obligation at issue was the tax-free exchange of real estate. The Federal Circuit concluded that the real estate investment tool designed to enable tax-free exchanges was an abstract concept. 671 F.3d at 1323, 101 USPQ2d at 1789. Examiner notes that the claimed invention is similar to the abstract idea found within Fort Properties in that the system is processing asset information pursuant to stored records, analogous to the interests managed under the “master agreement.”
An example of a claim reciting business relations is found in Credit Acceptance Corp. v. Westlake Services, 859 F.3d 1044, 123 USPQ2d 1100 (Fed. Cir. 2017). The business relation at issue in Credit Acceptance is the relationship between a customer and dealer when processing a credit application to purchase a vehicle. The patentee claimed a "system for maintaining a database of information about the items in a dealer’s inventory, obtaining financial information about a customer from a user, combining these two sources of information to create a financing package for each of the inventoried items, and presenting the financing packages to the user." 859 F.3d at 1054, 123 USPQ2d at 1108. The Federal Circuit described the claims as directed to the abstract idea of "processing an application for financing a loan" and found "no meaningful distinction between this type of financial industry practice" and the concept of intermediated settlement in Alice or the hedging concept in Bilski. 859 F.3d at 1054, 123 USPQ2d at 1108.
Examiner notes that the claimed invention is similar to the abstract idea in Credit Acceptance Corp., in that the system is processing asset information to find similar assets based on the processing.
Additionally, Examiner notes that the claims contain language directed to “extracting a portion of the first asset using a first machine learning model, wherein the portion of the first asset comprises at least one of: a foreground, a background, or a subject; and determining a vector embedding of the portion of the first asset” and using “machine learning,” which, under the broadest reasonable interpretation, requires specific mathematical calculations (using machine learning models). “Generative models (also referred to as generative artificial intelligence models, generative AI models, generative machine learning models, generative ML models, or simply "models" herein) can be used to generate assets such as audio, images, video, 3D models, written works, and so forth” (see at least Specification ¶ 38), and “using a machine learning model to verify the provided assets, for example to compare the provided assets to other assets, such as assets previously submitted to the platform and/or other assets accessible by the platform” (see at least Specification ¶ 55); the claims therefore encompass mathematical concepts. “For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A, Prong One to make the analysis clear on the record.” MPEP 2106.04, subsection II.B. Under such circumstances, however, the Supreme Court has treated such claims in the same manner as claims reciting a single judicial exception. Id. (discussing Bilski v. Kappos, 561 U.S. 593 (2010)). Here, the claimed invention falls within the mental process/certain methods of organizing human activity groupings of abstract ideas, and its steps fall within the mathematical concepts grouping of abstract ideas. The limitations are considered together as a single abstract idea for further analysis. (Step 2A, Prong One: YES).
Examiner directs applicant to Recentive Analytics v. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023), in which the Court decided, in a case involving a more specific use of machine learning than the instant application, that claims involving a trained model updating upon new information amount to merely applying known computer elements as a tool to implement the method. The claimed invention is similar to the claims found within Recentive in that the model is being trained to output information, and the instant application is similar to the identified use of “training” a model. No improvement to the overall method of machine learning or the algorithm is presented. The mere use of machine learning as a tool does not render the claims patent-eligible subject matter. “Considering the focus of the disputed claims, Alice, 573 U.S. at 217, it is clear that they are directed to ineligible, abstract subject matter. Recentive has repeatedly conceded that it is not claiming machine learning itself. See Appellant’s Br. 45; Transcript at 26:14–15. Both sets of patents rely on the use of generic machine learning technology in carrying out the claimed methods for generating event schedules and network maps. See, e.g., ’367 patent, col. 6 ll. 1–5, col. 11–12; ’811 patent, col. 3, l. 23, col. 5 l. 4. The machine learning technology described in the patents is conventional, as the patents’ specifications demonstrate. See, e.g., ’367 patent, col. 6 ll. 1–5 (requiring “any suitable machine learning technology . . . such as, for example: a gradient boosted random forest, a regression, a neural network, a decision tree, a support vector machine, a Bayesian network, [or] other type of technique”); ’811 patent, col. 3 l. 23 (requiring the application of “any suitable machine learning technique.”).” (Recentive Analytics v. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023)).
“Instead of disclosing “a specific implementation of a solution to a problem in the software arts,” Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339 (Fed. Cir. 2016), or “a specific means or method that solves a problem in an existing technological process,” Koninklijke, 942 F.3d at 1150, the only thing the claims disclose about the use of machine learning is that machine learning is used in a new environment.” (Recentive Analytics vs. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023)).
“the claimed methods are not rendered patent eligible by the fact that (using existing machine learning technology) they perform a task previously undertaken by humans with greater speed and efficiency than could previously be achieved. We have consistently held, in the context of computer-assisted methods, that such claims are not made patent eligible under § 101 simply because they speed up human activity. See, e.g., Content Extraction, 776 F.3d at 1347; DealerTrack, 674 F.3d at 1333. Whether the issue is raised at step one or step two, the increased speed and efficiency resulting from use of computers (with no improved computer techniques) do not themselves create eligibility. See, e.g., Trinity Info Media, LLC v. Covalent, Inc., 72 F.4th 1355, 1363 (Fed. Cir. 2023) (rejecting argument that “humans could not mentally engage in the ‘same claimed process’ because they could not perform ‘nanosecond comparisons’ and aggregate ‘result values with huge numbers of polls and members’”) (internal citation omitted); Customedia Techs., LLC v. Dish Network Corp., 951 F.3d 1359, 1365 (Fed. Cir. 2020) (holding claims abstract where “[t]he only improvements identified in the specification are generic speed and efficiency improvements inherent in applying the use of a computer to any task”); compare McRo, 837 F.3d at 1314-16 (finding eligibility of claims to use specific computer techniques different from those humans use on their own to produce natural-seeming lip motion for speech)” (Recentive Analytics vs. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023)).
Similar to Recentive, there is “nothing in the claims, whether considered individually or in their ordered combination, that would transform the Machine Learning Training and Network Map patents into something “significantly more” than the abstract idea of generating event schedules and network maps through the application of machine learning. See SAP Am., 898 F.3d at 1169–70; Broadband iTV, 113 F.4th at 1372. Recentive has also failed to identify any allegation in its complaint that would suffice to plausibly allege an inventive concept to defeat Fox’s motion to dismiss. Trinity, 72 F.4th at 1365” (Recentive Analytics v. Fox Corp., 692 F.Supp.3d 438 (D. Del. 2023)). Examiner notes that the claims are similar to the identified unpatentable subject matter in that the system is merely applying standard machine learning elements to the image processing.
The conclusion that the claim recites an abstract idea within the groupings of MPEP 2106.04(a)(2) remains grounded in the broadest reasonable interpretation consistent with the description of the invention in the specification. For example, [App. Spec 53], “the platform can enable tracking of assets used in the creation of generated assets using generative models”. Accordingly, the Examiner submits that claims 1 and 11 recite an abstract idea based on the language identified in claims 1-20, and the abstract ideas previously identified based on that language remain consistent with the groupings of Step 2A Prong 1 of MPEP 2106.04(a)(2).
The limitations reciting “extracting a portion of the first asset using a first machine learning model” provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. The recitations in the limitations also merely indicate a field of use or technological environment in which the judicial exception is performed. Although the claims recite the additional element “extracting a portion of the first asset using a first machine learning model,” this type of limitation merely confines the use of the abstract idea to a particular technological environment (modeling) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Examiner notes that, with the addition of the trained model, under the broadest reasonable interpretation the system requires specific mathematical calculations (using a trained model) (see at least Specification paragraph 55). “For example, in a claim that includes a series of steps that recite mental steps as well as a mathematical calculation, an examiner should identify the claim as reciting both a mental process and a mathematical concept for Step 2A, Prong One to make the analysis clear on the record.” MPEP 2106.04, subsection II.B. Under such circumstances, however, the Supreme Court has treated such claims in the same manner as claims reciting a single judicial exception. Id. (discussing Bilski v. Kappos, 561 U.S. 593 (2010)). Here, the claimed invention falls within the mental process/certain methods of organizing human activity groupings of abstract ideas, and the model-based calculation steps fall within the mathematical concepts grouping of abstract ideas. The limitations are considered together as a single abstract idea for further analysis. (Step 2A, Prong One: YES).
If the claims are directed toward the judicial exception of an abstract idea, it must then be determined under Step 2A Prong 2 whether the judicial exception is integrated into a practical application. Examiner notes that the considerations under Step 2A Prong 2 comprise most of the considerations previously evaluated in the context of Step 2B. The Examiner submits that the considerations discussed previously, which determined that the claim does not recite “significantly more” at Step 2B, would be evaluated the same under Step 2A Prong 2 and result in the determination that the claim does not integrate the abstract idea into a practical application.
The instant application fails to integrate the judicial exception into a practical application because it merely recites the words “apply it” (or an equivalent) with the judicial exception, or merely includes instructions to implement an abstract idea. The instant application is directed to a method instructing the reader to implement the identified method of organizing human activity of legal interactions, business interactions, and risk management (i.e., compare assets) on a generically claimed computer structure. For instance, the additional elements, or combination of elements, other than the abstract idea itself include elements such as “models” recited at a high level of generality. These elements do not themselves amount to an improvement to the interface or computer, or to a technology or another technical field. This is consistent with Applicant’s disclosure, which describes the computing device as follows: “the computer system 2602 can be in communication via a plurality of networks. While Figure 26 illustrates an implementation of a computing system 2602, it is recognized that the functionality provided for in the components and modules of computer system 2602 may be combined into fewer components and modules, or further separated into additional components and modules. [0192] The computer system 2602 can comprise a module 2614 that carries out the functions, methods, acts, and/or processes described herein. The module 2614 is executed on the computer system 2602 by a central processing unit 2606 discussed further below.” (App. Spec. ¶¶ 191-192).
Accordingly, the claimed “system,” read in light of the specification, employs any of a wide range of possible devices comprising a number of components that are “well-known,” including an indiscriminate “machine learning” model, “model,” “processor,” and “computer” (e.g., processing device, modules). Thus, the claimed structure amounts to appending generic computer elements to the abstract idea comprising the body of the claim. The computing elements are involved only at a general, high level and do not play a particular role within any of the functions other than to carry out a computer-implemented method using a generically claimed “machine learning” model, “model,” and “computer”; even basic, generic recitations that imply use of the computer, such as storing information via servers, would add little if anything to the abstract idea.
Similarly, reciting the abstract idea as software functions used to program a generic computer is not significant or meaningful: generic computers are programmed with software to perform various functions every day. A programmed generic computer is not a particular machine and by itself does not amount to an inventive concept because, as discussed in MPEP 2106.05(f), adding the words “apply it” (or an equivalent) to the judicial exception, or mere instructions to implement an abstract idea on a computer, as discussed in Alice, 134 S. Ct. at 2360, 110 USPQ2d at 1984, is not enough to integrate the exception into a practical application. Further, it is not relevant that a human may perform a task differently from a computer; it is necessarily true that a human might apply an abstract idea in a different manner from a computer. What matters is the application: “stating an abstract idea while adding the words ‘apply it with a computer’” will not render an abstract idea non-abstract. Tranxition v. Lenovo, Nos. 2015-1907, -1941, -1958 (Fed. Cir. Nov. 16, 2016), slip op. at 7-8.
Here, the instructions entirely comprise the abstract idea, leaving little if any aspect of the claim for further consideration under Step 2A Prong 2. In short, the role of the generic computing elements recited in claims 1 and 11 is the same as the role of the computer in the claims considered by the Supreme Court in Alice, and the claim as a whole amounts merely to an instruction to apply the abstract idea on a generic computerized system. Therefore, the claims fail to integrate the abstract idea into a practical application (MPEP 2106.04(d)). Under MPEP 2106.05, this supports the conclusion that the claim is directed to an abstract idea, and the analysis proceeds to Step 2B.
Many of the considerations evaluated in Step 2A need not be reevaluated in Step 2B because the outcome will be the same. Here, on the basis of the additional elements beyond the abstract idea, considered individually and in combination as discussed above, the Examiner respectfully submits that claims 1-20 do not contain any additional elements that, individually or as an ordered combination, amount to an inventive concept, and the claims are therefore ineligible.
With respect to the dependent claims, they do not recite anything found to transform the abstract idea into a patent-eligible invention. The dependent claims merely recite further embellishments of the abstract idea and do not claim anything that amounts to significantly more than the abstract idea itself.
Claims 2-10 and 12-20 are directed to further embellishments of the abstract idea in that they are directed to aspects of the central theme of the abstract idea identified above, as well as to data processing and transmission, which the courts have recognized as insignificant extra-solution activity (see at least MPEP 2106.05(g)). Data transmission is one of the most basic and fundamental uses of a generic computing device and is not sufficient to amount to significantly more. The Examiner takes the position that simply appending such a well-understood data-transmission step to the judicial exception does not amount to significantly more than the abstract idea.
Therefore, since there are no limitations in the claim that transform the abstract idea into a patent eligible application such that the claim amounts to significantly more than the abstract idea itself, the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter. See MPEP 2106.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
Determining the scope and contents of the prior art.
Ascertaining the differences between the prior art and the claims at issue.
Resolving the level of ordinary skill in the pertinent art.
Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-6, 8-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 20240419907 to Laprise et al. (hereinafter Laprise) in view of U.S. Patent No. 11861528 B1 to Brager et al. (hereinafter Brager).
Referring to Claims 1 and 11 (substantially similar in scope and language), Laprise teaches a computer-implemented method for lineage tracking of assets (see at least Laprise: Abstract), the computer-implemented method comprising:
accessing a request for lineage tracking, the request comprising information relating to a first asset (see at least Laprise: ¶ 207 “a data processing pipeline may receive input data (e.g., imaging data 1308) in a specific format in response to an inference request (e.g., a request from a user of deployment system 1306). In at least one embodiment, input data may be representative of one or more images, video, and/or other data representations generated by one or more imaging devices”; see at least Laprise: ¶ 210 “In at least one embodiment, completed and validated applications or containers may be stored in a container registry and associated machine learning models may be stored in model registry 1324. In at least one embodiment, a requesting entity—who provides an inference or image processing request—may browse a container registry and/or model registry 1324 for an application, container, dataset, machine learning model, etc., select a desired combination of elements for inclusion in data processing pipeline, and submit an imaging processing request. In at least one embodiment, a request may include input data (and associated patient data, in some examples) that is necessary to perform a request, and/or may include a selection of application(s) and/or machine learning models to be executed in processing a request”; see also Laprise: ¶ 138 “a content application 624 executing on a cloud server 620 (e.g., a cloud server or edge server) may initiate a session associated with at least one client device 602, as may use a session manager and user data stored in a user database 636, and can cause content such as one or more digital assets (e.g., implicit and/or explicit object representations or maps) from an asset repository 634 to be determined by a content manager 626. A content manager 626 may work with a trained language module 628 to generate text-based representations of an environment based upon several types of input data. 
In at least one embodiment, these text-based representations can be provided to a mapping module 630 in generating mapping data for, or with respect to, the generated virtual environment or representation.”);
Examiner notes that Laprise goes into further detail in one embodiment, stating that the system accesses, processes, and displays asset information, but fails to teach requesting lineage tracking (defined in the specification as an infringement analysis; see Specification ¶ 88).
However, Brager, which discloses a method and system for generating intellectual property information based on submitted proprietary objects, teaches a method and system for processing asset information wherein the system uses vectors, hash values, and machine learning techniques to process information upon request regarding asset infringement potential and instances (see also Brager: Col. 21 Line 45-67 and Cols. 22-23 “The shape fitting process is illustrated and described in more detail below with reference to FIG. 2A, but briefly can include attempting to fit the images to one another and outputting a value, vector, embedding distance, and/or other data that can indicate whether the augmented product image 110 is a tight fit (e.g., closely matches the augmented patent drawing 122 in such a way as to be confusingly similar to an ordinary observer).” and “The confirmation analysis may also be achieved by yet another neural network 124. A neural network 124 trained on e.g., bona fide and counterfeit sales listings may be used to learn counterfeit clues and to output a vector, value and/or embedding loss that predicts the risk that the listing is unauthorized to host the refined infringement prediction, enabling confirmation of infringement.”; see also Laprise: ¶ 89 “The perception-based embedding can be compared against a stored embedding for that intersection, or may be compared against embeddings for similar intersections identified through a similarity-based search”).
Therefore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of using machine learning models to generate content, filter information, and create and apply vectors to intellectual property data in order to identify potential infringement (as disclosed by Brager) to the known system that accesses, processes, and displays asset information (as disclosed by Laprise) to obtain and process legal information. One of ordinary skill in the art would have been motivated to apply the known technique of using machine learning models to generate content, filter information, and create and apply vectors to intellectual property data because it would obtain and process legal asset information (see Brager: Abstract).
Furthermore, it would have been obvious to one of ordinary skill in the art at the time of filing to apply the known technique of using machine learning models to generate content, filter information, and create and apply vectors to intellectual property data in order to identify potential infringement (as disclosed by Brager) to the known system that accesses, processes, and displays asset information (as disclosed by Laprise) to obtain and process legal information, because the claimed invention merely applies a known technique to a known method ready for improvement to yield predictable results. See KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 406 (2007). In other words, all of the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results to one of ordinary skill in the art at the time of the invention (i.e., predictable results are obtained by applying the known technique of using machine learning models to generate content, filter information, and create and apply vectors to intellectual property data in order to identify potential infringement to the known system that accesses, processes, and displays asset information to obtain and process legal information). See also MPEP § 2143(I)(D).
Examiner notes that the combination of Laprise and Brager teaches:
determining one or more first metadata items of the first asset (see at least Laprise: ¶ 138 “a content application 624 executing on a cloud server 620 (e.g., a cloud server or edge server) may initiate a session associated with at least one client device 602, as may use a session manager and user data stored in a user database 636, and can cause content such as one or more digital assets (e.g., implicit and/or explicit object representations or maps) from an asset repository 634 to be determined by a content manager 626. A content manager 626 may work with a trained language module 628 to generate text-based representations of an environment based upon several types of input data. In at least one embodiment, these text-based representations can be provided to a mapping module 630 in generating mapping data for, or with respect to, the generated virtual environment or representation.”; see also Laprise: ¶ 202 “model registry 1324 may be backed by object storage that may support versioning and object metadata”; see also Laprise: ¶ 262; see also Laprise: ¶ 76 “it would be necessary for a human annotator to review the map representation (or other representation) and apply the necessary labels or annotations, such as to label this as an intersection and to apply the relevant rules. While an attribute-based search might be used (or a search based on keyword matching), differences and variations such as those discussed can prevent there being an exact match between the intersections”);
wherein determining the first vector embedding comprises: extracting a portion of the first asset using a first machine learning model, wherein the portion of the first asset comprises at least one of: a foreground, a background, or a subject (see at least Laprise: ¶ 72 “one or more machine learning models can be trained and used to provide information about the semantics, topology, and geometry (or other such aspects) of an object or environment. This can include the use of one or more language models that can take in various types of input and output a textual description of, or textual content for, any of these aspects. In some embodiments, the raw sensor data can be provided as input, while in other embodiments there may be at least some amount of pre-processing, such as to determine bounding boxes around objects and extract the relevant image data, or perform basic object classification based on operations such as computer vision-based analysis, among other such options”; see also Laprise: ¶ 113 “Semantic information may include, for example and without limitation, a landmark type, a landmark association (e.g., a traffic light's associated lane IDs, lane boundary segment locations), and/or textual information (e.g., text displayed on signs).”; see also Laprise: ¶ 114 “An LLM 516 trained with an RTL (or other DSL) corpus built from a database 512 of map ground truth data can be queried to correct features output from a machine learning (ML) automation pipeline. The output of an LLM 516—such as by using a writer and/or parser component or module 520—can be mapped back to the extracted features. A difference (e.g., diff) operation can then be performed with respect to inferred landmarks from an automation component 518, for example, to perform any appropriate corrections to generate a map graph 522. 
An example use case is to infer the road topology (e.g., edges) from an incomplete set of nodes (e.g., landmarks) with potential applications in, for example and without limitation, tooling, quality assurance (QA), and automation. In some embodiments, document embeddings may be indexed in a vector database, or n-dimensional latent space (where n can represent a number of extracted features or feature types), and the index can then be used to cluster similar intersections—thus allowing the unsupervised labeling and retrieval of operation design domains (ODDs) (e.g., features or landmarks).”; see also Laprise: ¶ 135 “The set of observations can correspond to raw sensor data, features extracted from the raw sensor data, a set of feature vectors or embeddings, or an object map, among other such options. The language model can be a generative model that was trained using spatial and semantic information for various environments, and is able to infer relationships between various objects or features in an environment. A text string can be generated and received 584 as output from the language model. The text string in this example can be a single, tokenized text string that comprises a tokenized description of the environment determine in part upon aspects and relationships determined or inferred for the set of observations, including semantic, geometric, and topological aspects of those observations.”; see at least Laprise: ¶ 205-206 and 227-228; see also Laprise: ¶ 39); and
determining a vector embedding of the portion of the first asset (see at least Laprise: ¶ 34-37 “The availability of such embeddings, feature vectors, latent points, or other such representations also allows for similarity-based querying. This can include querying of map-related data, where sensor data (or other observations or data representative of a location) can be fed to a language model that can generate an embedding for a new or current ODD, and this embedding can be used to locate information for similar ODDs by locating similar or proximate embeddings in, for example, the latent space. In at least this context, an embedding can be from a single token (or other discrete object or component that may be comprised of text and/or other data), sentence, paragraph, document, graph, or other such format. An embedding can be generated from other sources as well, such as image patches and audio clips, among other such options”, 39 “The generative AI model might take the sensor data directly as input or might receive input that is generated from the sensor data in one or more stages of a pipeline, such as stages to extract features and generate embeddings of those features in a latent space that can be provided as input to the generative model.”, 50 “a language model could take as input an object-based representation 110 and generate what is essentially a tokenized text string representation of the object graph 112. In other embodiments, the language model might be able to take other inputs that would allow for at least some steps in this generation pipeline to be eliminated as separate steps performed by separate processes or components. 
For example, an LLM could be trained to take in a set of determined aspects 108 (e.g., semantics, topology, or geometry information for an environment or objects in that environment) in text format and generate a tokenized text string representative of the object graph 112 without ever having to generate an object-based representation. Similarly, in some embodiments an LLM can take as input the internal representation 106, or even the sensor data 104, without the need for separate intermediate representations. For example, a model (as part of the LLM or a separate model) can analyze the sensor data 104 for the environment and encode features of the sensor data into a latent space (or other embedding). The LLM can then take a feature vector as input that is a function of these individual latent space encodings, and can directly generate the tokenized text string representation of the environment. The features extracted can include semantic, relationship, and geometry features, among other