Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
Regarding claim 9, the examiner recognizes that, in the context of software, a delta update includes only the changes or differences between the current version and the new version. It is considered a well-understood, routine, and conventional activity in the field. In the context of the invention, the examiner further interprets a delta update as a change to the first dataset that requires a data integrity check [0197]. The examiner will examine this claim accordingly.
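For illustration only, the interpreted operation (a delta update carrying only the changed entries, applied together with a data integrity check) can be sketched as follows; this is a hypothetical Python sketch, and the field names and hash choice are assumptions, not drawn from the application:

```python
import hashlib

def make_delta(old: dict, new: dict) -> dict:
    """Build a delta update: only the entries that changed or were added."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def apply_delta(old: dict, delta: dict, expected_digest: str) -> dict:
    """Apply a delta update to the first dataset, then verify integrity."""
    updated = {**old, **delta}
    digest = hashlib.sha256(repr(sorted(updated.items())).encode()).hexdigest()
    if digest != expected_digest:
        raise ValueError("data integrity check failed")
    return updated

old = {"a": 1, "b": 2}
new = {"a": 1, "b": 3, "c": 4}
delta = make_delta(old, new)  # only the changed entries: {"b": 3, "c": 4}
expected = hashlib.sha256(repr(sorted(new.items())).encode()).hexdigest()
result = apply_delta(old, delta, expected)  # reconstructs the new version
```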
Claim Rejections – 35 U.S.C. § 101
3. 35 U.S.C. § 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
4. Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter. Claims 1-20 are directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without providing significantly more.
Step 1
Step 1 of the subject matter eligibility analysis per MPEP § 2106.03 requires the claims to be directed to a process, machine, manufacture, or composition of matter. Claims 1-20 are directed to a process (method) and a machine (system), which are statutory categories of invention.
Step 2A
Claims 1-20 are directed to abstract ideas, as explained below.
Prong One of the Step 2A analysis requires identifying the specific limitation(s) in the claim under examination that the examiner believes recite an abstract idea, and determining whether the identified limitation(s) fall within at least one of the groupings of abstract ideas: mathematical concepts, mental processes, and certain methods of organizing human activity.
Step 2A-Prong 1
The claims recite the following limitations directed to abstract ideas, which can be summarized as the abstract idea of determining the value of a dataset of an entity.
Claim 1 discloses:
A method for analyzing metadata received from an agent related to a first dataset comprising:
establishing a connection to the agent; (observation, evaluation, judgement, opinion, mitigating risk)
receiving metadata related to the first dataset, wherein the agent is configured to access the first dataset to generate the metadata related to the first dataset, the metadata comprising a plurality of attributes of the first dataset and a summary of the first dataset; (observation, evaluation, judgement, opinion, mitigating risk)
applying, a valuation model to the received metadata, (observation, evaluation, judgement, opinion, mitigating risk),
wherein the valuation model using marketplace data, the marketplace data comprising sales prices of one or more datasets and attributes of one or more datasets, wherein the attributes are used to provide inputs to the model, and wherein the model is trained to output the sales prices of the one or more datasets; (observation, evaluation, judgement, opinion, mitigating risk) and
determining, based on the applying the valuation model, an estimated value of the first dataset, (observation, evaluation, judgement, opinion, mitigating risk).
Additional limitations employ the method to: summarize the number of records in the dataset, the completeness of the dataset, the uniqueness of records in the dataset, a growth rate of the dataset, an average number of records associated with each primary key identified in the dataset, or an age of the dataset (observation, evaluation, judgement, opinion – claim 2); receive product ideas from the marketplace, each idea comprising product attributes (observation, evaluation, judgement, opinion – claim 3); determine similar attributes in the first dataset and the attributes of the marketplace, correlate attributes, recommend attributes to add to the dataset, estimate values of adding attributes to the dataset, and provide recommended attributes and values to a client (observation, evaluation, judgement, opinion, mitigating risk – claim 4); receive attributes from a marketplace associated with dataset sales, receive buyer identifiers associated with the dataset sales, determine similar attributes of the dataset and the attributes of the marketplace associated with the dataset sales, determine, based on the similar attributes and buyer identifiers, one or more recommended buyers of the dataset, and provide the buyers to a client (observation, evaluation, judgement, opinion, mitigating risk – claim 5); receive attributes from a marketplace associated with sales, determine similar attributes and marketplace attributes associated with sales, group attributes into categories, determine high significance categories, determine frequencies of similar attributes, remove high frequency attributes of the similar attributes, remove zero frequency attributes of the similar attributes, and identify scarce attributes having greater than zero frequency and less than high frequency (observation, evaluation, judgement, opinion, mitigating risk – claim 6); use the attributes of the first dataset, generate a description for each attribute in the dataset, and sort the attributes and descriptions to enable a buyer to answer questions (observation, evaluation, judgement, opinion – claim 7); receive and store an encrypted copy of the dataset (observation, evaluation, judgement, opinion – claim 8); receive an update to the dataset including a delta update (observation, evaluation, judgement, opinion, mitigating risk – claim 9); and receive an encrypted copy of the dataset, store the dataset, and store an encryption key in a different location (observation, evaluation, judgement, opinion, mitigating risk – claim 10).
Each of these claimed limitations employs mental processes involving observation, evaluation, judgement, and opinion, as well as certain methods of organizing human activity, namely fundamental economic principles based on mitigating risk.
Claims 11-20 recite similar abstract ideas as those identified with respect to claims 1-10.
Thus, the concepts set forth in claims 1-20 recite abstract ideas.
Step 2A-Prong 2
As per MPEP § 2106.04, claims 1-20 recite additional limitations which are hardware or software elements, such as an electronic agent-based computer system, an electronic agent installed on a client computing system, a machine learning model, a secure electronic network connection, dynamically generating the metadata, a machine learning model that is trained using marketplace data, a large language model, a database, an encryption key, a data store, a processor, and a non-volatile memory. These limitations are not sufficient to integrate the abstract ideas into a practical application because these elements are invoked merely as tools to apply the instructions of the abstract ideas in a specific technological environment. The mere application of an abstract idea in a particular technological environment, and merely limiting the use of an abstract idea to a particular technological field, do not integrate an abstract idea into a practical application (MPEP § 2106.05(f) & (h)).
Evaluated individually, the additional elements do not integrate the identified abstract ideas into a practical application. Evaluating the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually.
The claims do not amount to a “practical application” of the abstract idea because they do not: (1) recite any improvement to another technology or technical field; (2) recite any improvement to the functioning of the computer itself; (3) apply the judicial exception with, or by use of, a particular machine; (4) effect a transformation or reduction of a particular article to a different state or thing; or (5) provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment.
Accordingly, claims 1-20 are directed to abstract ideas.
Step 2B
Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements when considered both individually and as an ordered combination, do not amount to significantly more than the abstract idea.
The analysis above identifies the additional elements recited in the claims beyond those directed to an abstract idea and explains why the identified judicial exception(s) are not integrated into a practical application. These findings are hereby incorporated into the analysis of the additional elements when considered both individually and in combination.
For the reasons provided in the analysis of Step 2A, Prong Two, the additional elements, evaluated individually, do not amount to significantly more than a judicial exception.
Evaluating the claim limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. In addition to the factors discussed regarding Step 2A, prong two, there is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely amount to instructions to implement the identified abstract ideas on a computer.
Therefore, since there are no limitations in the claims 1-20 that transform the exception into a patent eligible application such that the claims amount to significantly more than the exception itself, the claims are directed to non-statutory subject matter and are rejected under 35 U.S.C. § 101.
Claim Rejections – 35 U.S.C. § 103
5. The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. Claims 1-4, 7-9, 11-14, and 17-19 are rejected under 35 U.S.C. § 103 as being unpatentable over Sodhi (US 20220197914 A1, “Ranking Datasets Based on Data Attributes”), hereafter Sodhi, in view of Wang (US 20230015813 A1, “Data-Sharing Systems and Methods which use Multi-Angle Incentive Allocation”), hereafter Wang, in further view of Crabtree (US 20210112101 A1, “Data Set and Algorithm Validation, Bias Characterization, and Valuation”), hereafter Crabtree.
Regarding claim 1, Sodhi teaches an electronic agent-based computer system method for analyzing metadata, (the computer determines candidate datasets having a field suitability value that exceeds a predetermined suitability threshold value, and the field suitability value represents a degree of similarity between a set of fields associated with said dataset and the set of target data fields. The computer assesses the associated metadata set for each candidate dataset, with regard to the target attributes, [0004]), received from an electronic agent operating on a client computing system related to a first dataset accessible by the client computing system, using a machine learning model, (in some domains, datasets are produced by user systems as a byproduct of system operation and kept for future use. In other domains, datasets, especially large or customized datasets, may be provided by third parties at an expense to the user. Artificial Intelligence (AI) systems can identify patterns within data contained in datasets to reveal trends that are often difficult to predict in other ways. Since datasets can vary widely in terms of content, some datasets will be more useful to certain users than others, [0002]), comprising:
establishing, by a computing system, a secure electronic network connection, Sodhi does not teach, Wang teaches, (to facilitate data sharing, a computing system is used to link data providers and data consumers and to provide a mechanism such as a platform using necessary technologies for smoothly and securely transferring data received from data providers to data consumers, [0005]),
Sodhi and Wang are both considered to be analogous to the claimed invention because they are both in the field of dataset valuation. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the data valuation method of Sodhi with the secure connectivity of Wang to ensure long-term sustainable data sharing, Wang, [0005],
to the electronic agent running on the client computing system, Sodhi teaches, (in the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), [0064]).
receiving, by the computing system, from the electronic agent operating on the client computing system via the secure electronic network connection, metadata related to the first dataset, wherein the electronic agent is configured to access the first dataset to dynamically generate the metadata related to the first dataset, the metadata comprising a plurality of attributes of the first dataset and a summary of the first dataset; Sodhi teaches, (the computer generates a group of metadata sets for an associated plurality of datasets. The computer determines candidate datasets having a field suitability value that exceeds a predetermined suitability threshold value, and the field suitability value represents a degree of similarity between a set of fields associated with said dataset and the set of target data fields. The computer assesses the associated metadata set for each candidate dataset, with regard to the target attributes. The computer generates a compared attribute score for each candidate dataset that indicates a degree of likelihood that an associated dataset will have content exhibiting said target dataset attributes, [0004]),
applying, by the computing system, a valuation model to the received metadata, wherein the valuation model comprises a machine learning model Sodhi does not teach, Wang teaches, (the data-sharing system disclosed herein comprises a system-level software structure and method for incentive allocation with a number of functional modules, which surpasses the existing solutions in the ability to include more advanced and comprehensive aspects of data valuation of data provided by data providers, and smartly makes use of machine learning or Artificial Intelligence (AI) to reflect the value of the data of data providers for specific real-world tasks, [0008]), that is trained using marketplace data, the marketplace data comprising sales prices of one or more datasets and attributes of one or more datasets, wherein the attributes are used to provide inputs to the machine learning model, Sodhi does not teach, Wang teaches, (with the selection of the AI model performance comparison route from the routing function (described later), the AI engine 214 (see FIG. 4) uses a training function 380 to train the AI model 228 using each of the filtered training datasets 332 obtained from the data-quality assurance module 322 and a machine learning algorithm, such as supervised or semi-supervised learning, to generate a plurality of trained AI models each corresponding to a filtered training dataset 332. Then, an inference function 384 of the data-valuation module 324 starts the valuation computation process for testing each of the trained AI models using the test datasets 342 and computing an evaluation value 368 for each trained AI model(which represents the value of the corresponding filtered training dataset 332). An efficiency-improved method based on the most important subset of the input dataset may be used to speed up the valuation computation. 
The obtained evaluation values 368 are combined by the overall evaluation function 366 for generating data value ranking results for the datasets 332 which are then output from the data-valuation module 324 to the rank-to-unit-value mapping module 326 for generating rank results 386 for the filtered training datasets 332, which are the used by the unit-value mapping function 388 to generate an incentive-split scheme (such as the estimated unit values of the datasets 332, that is, the estimated value of a unit of data of each dataset [0153]),
Sodhi and Wang are both considered to be analogous to the claimed invention because they are both in the field of dataset valuation. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the data valuation method of Sodhi with the advanced valuation of Wang to apply an efficiency-improved method based on the most important subset of the input dataset used to speed up the valuation computation, Wang, [0153],
and wherein the machine learning model is trained to output the sales prices of the one or more datasets; Sodhi does not teach, Crabtree teaches, (the value score is used as a pricing schedule for data set monetization, [0008]), and determining, by the computing system based on the applying the valuation model, an estimated value of the first dataset. Sodhi teaches, (the compared attribute scores are based, at least in part on an associated desirability value associated with each of said target dataset attributes, [0005]).
Sodhi and Crabtree are both considered to be analogous to the claimed invention because they are both in the field of dataset valuation. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the data valuation method of Sodhi with the monetization techniques of Crabtree to enable entities and people to achieve data set validation to facilitate data set and algorithm bias certification and scoring, Crabtree, [0004].
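For illustration only, the claimed arrangement, in which marketplace attributes provide the inputs to a valuation model trained to output sales prices, can be sketched as a simple least-squares fit; the numbers and feature choices below are hypothetical and are not drawn from the application or the cited references:

```python
import numpy as np

# Hypothetical marketplace data: one row per previously sold dataset.
# Columns: record count (millions), completeness (0-1), age (years).
X = np.array([[1.0, 0.90, 1.0],
              [5.0, 0.80, 2.0],
              [10.0, 0.95, 0.5],
              [2.0, 0.70, 2.5]])
prices = np.array([10_000.0, 42_000.0, 95_000.0, 15_000.0])  # observed sales prices

# "Train" the valuation model: least-squares fit from attributes to price.
A = np.hstack([X, np.ones((X.shape[0], 1))])  # append an intercept column
coef, *_ = np.linalg.lstsq(A, prices, rcond=None)

def estimate_value(attributes):
    """Apply the valuation model to the metadata attributes of a first dataset."""
    return float(np.append(attributes, 1.0) @ coef)

estimate = estimate_value([4.0, 0.85, 1.5])  # estimated value of the first dataset
```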
Regarding claim 2, the method of Claim 1, wherein the summary comprises at least one of a
number of records in the first dataset, a completeness of the first dataset, a uniqueness of records in the first dataset, a growth rate of records in the first dataset, an average number of records associated with each of a plurality of primary keys identified in the first dataset, or an age of the first dataset. Sodhi teaches, (the value of a given dataset can be based on a variety of factors, including dataset record field content and the scope of the information contained, [ ] and datasets with higher amounts of suitable information (e.g., the higher number of desired data fields) are preferred, [0008], and assessment of metadata for two comparison datasets may show that a dataset has content exhibiting the target dataset attributes (e.g., “dataset value range” and “dataset completeness”), [0040]).
Regarding claim 3, The method of Claim 1, further comprising:
receiving, by the computing system, a plurality of product ideas from a marketplace, each product idea comprising a plurality of product attributes; Sodhi teaches, (since datasets having content with some attributes may be more useful than others, aspects of the system (including, e.g., user data requirement questionnaires and other data use documents) help identify what a user needs as content. Aspects of the invention identify relevant contexts of data, indicating which datasets would be a good match for various user goals, [0016]),
determining, by the computing system, one or more similar attributes in the
attributes of the first dataset and the pluralities of product attributes; (derived metadata may, for example, include statistical properties of the data such as the type of distribution contained, mean values, data value variance. Derived metadata may also include indications of time series data and other similar and a variety of other data field correlations. According to aspects of the invention, derived metadata can also include information (as derived through known data mining techniques) about how the data has been used previously and what other inquiries it has supported. Derived metadata may also provide (when affirmative original content provider consent is confirmed) personally identifiable information that can be useful for marketing of products and opening new marketing channels or otherwise allowable, in accordance with the confirmed consent provided, [0037]),
determining, by the computing system, one or more product recommendations
based on the determined one or more similar attributes; and, (the server computer 102 includes Compared Attribute Assessment Module (CAAM) 120 that compares the dataset metadata of the comparison datasets identified by the (DFSAM) 118, to generate a compared attribute score value (CASV) for each compared datasets that represents a degree of likelihood that each associated compared dataset has content exhibiting the target dataset attributes, [0039]),
providing, by the computing system, the one or more product recommendations
to a client, (the computer identifies a set of target dataset attributes from a set of data use documents, and the data use documents indicate data scope preferences for the user, [0004], FIG. 3B, example 320 illustrates a request for a certain kind of information (represented by a question or a group of questions arranged into a questionnaire, and other data requirements) is passed to a data value engine. Several datasets (e.g., “HR data”, “customer datasets”, and “Click analysis”) and associated dataset metadata are also provided to the data value engine. The data value engine processes the input, evaluates the provided datasets according to suitability, and provides a list of the datasets ranked according to the determined suitability, [0046]).
Regarding claim 4, The method of Claim 1, further comprising:
receiving, by the computing system, a plurality of attributes from a marketplace; Sodhi teaches, (the computer identifies a set of target dataset attributes from a set of data use documents, and the data use documents indicate data scope preferences for the user, [0004]),
determining, by the computing system, one or more similar attributes in the attributes of the first dataset and the plurality of attributes from the marketplace; (The server computer 102 includes Compared Attribute Assessment Module (CAAM) 120 that compares the dataset metadata of the comparison datasets identified by the (DFSAM) 118, to generate a compared attribute score value (CASV) for each compared dataset that represents a degree of likelihood that each associated compared dataset has content exhibiting the target dataset attributes, [0039]),
determining, by the computing system, correlations between one or more attributes of the plurality of attributes from the marketplace and the one or more similar attributes; (the data value engine processes the input, evaluates the provided datasets according to suitability, and provides a list of the datasets ranked according to the determined suitability, [0046]).
determining, by the computing system based on the correlations, one or more
recommended attributes to add to the first dataset; (the data value engine provides ranked datasets, data value, and facet ranking as output. According to aspects of the invention, metadata includes information about the data of a given dataset (e.g., a domain associated with a given dataset). The metadata can be stored in different forms along with the data. In many object storage arrangements, data is stored as objects, and metadata is stored as key-value pairs associated with the data objects. Metadata is primarily identified (e.g., extracted with automated mechanisms, such as analysis algorithms or similar routines selected by one skilled in this field) from within the data itself or manually, with input from data experts that provide additional insight and information about the various data objects. According to aspects of the invention, scores are numerical values that represent the relative importance of a facet (e.g., attribute or feature) in the metadata. A facet is a data property either derived through automation or added to the dataset as part of input provided by a domain expert, [0045]),
estimating, by the computing system, one or more values of adding one or more
recommended attributes to the first dataset; (in particular, business questionnaire information and business process information are passed from a business owner to an activity identification phase, in which target data facets and required system classes are identified. A business metric to data converter then provides required facets from business and data fields to a data field identifier, and a facet valuation is generated. The facet valuation is passed to a dataset valuation phase, in which a dataset ranker provides a ranking of the datasets, [0048]), and
providing, by the computing system, the one or more recommended attributes
and the one or more values to a client, (this information is then passed back to the business owner as output, [0048]).
Regarding claim 7, the method of Claim 1, further comprising:
providing, by the computing system to a large language model, the attributes of the first dataset; Sodhi teaches, (“Stage 2: Dataset Value Evaluation Engine,” in which various field requirements and desired dataset traits, including target fields identified in first stage and dataset target attributes
(e.g., dataset facets), identified in a third stage are compared using known NLP, machine learning comparisons, and other methods of computerized analysis, against sets of metadata that each describe provided datasets, [0047]),
generating, by the computing system using the large language model, a description for each of the attributes of the first dataset; (according to aspects of the invention, metadata includes descriptive information indicating content-based characteristics of the dataset. [ ] Aspects of the invention rank datasets and provide relevance scores based on attributes and range values for each of the metadata and its range of values. Aspects of the invention formulate and derive a systematic method for determining a value for the dataset based on the business requirements and the content of the data. Aspects of the invention use the score and derive a ranking for each facet of the metadata. Aspects of the invention use the dataset value to compare two data sets with respect to a business requirement, [0013]), storing, by the computing system, the attributes of the first dataset and the descriptions in a database, wherein in response to a query from a buyer, the computing system is configured to search the database for attributes that match the query and to provide results of the search to the buyer, (Aspects of the invention identify the data requirements of the businesses. [ ] Aspects of the invention enable a search mechanism of the data sets based on the content of the data. Aspects of the invention use the history of data usage in different contexts to generate metadata and use them to identify business context when search events are conducted. Aspects of the invention search a corpus of data sets based on the input of a set of business requirements and rank the results in terms of those best matched for suitability, [0013]).
Regarding claim 8, The method of Claim 1, further comprising:
receiving, by the computing system from the agent installed on the client computing system of the client, a copy of the first dataset, wherein the copy is an encrypted copy of the first dataset; and
storing the copy of the first dataset. Sodhi does not teach, Crabtree teaches, (Regardless of network device configuration, the system of an aspect may employ one or more memories or memory modules configured to store data, program instructions for the general-purpose network operations, or other information relating to the functionality of the aspects described. [ ] Memory 16 or memories 11, 16 may also be configured to store data structures, configuration data, encryption data, historical system operations information, or any other specific or generic non-program information described herein).
Sodhi and Crabtree are both considered to be analogous to the claimed invention because they are both in the field of dataset valuation. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the data valuation method of Sodhi with the encryption techniques of Crabtree to contribute and curate data to create a mutually beneficial store 410 of reliable and large data set collections, Crabtree, [0055].
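For illustration only, the arrangement relied upon for claims 8 and 10 (storing an encrypted copy of the dataset while holding the encryption key in a different location) can be sketched as follows; the toy XOR cipher merely stands in for a real algorithm such as AES, and the store names are hypothetical:

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher; a real system would use AES or similar."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

dataset = b"id,name\n1,alice\n2,bob\n"
key = secrets.token_bytes(32)

# Store the encrypted copy and the key in separate locations.
data_store = {"client/first_dataset": xor_cipher(dataset, key)}
key_store = {"client/first_dataset": key}  # held in a different location

# Only a party holding both pieces can recover the plaintext.
recovered = xor_cipher(data_store["client/first_dataset"],
                       key_store["client/first_dataset"])
```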
Regarding claim 9, The method of Claim 8, further comprising:
receiving, by the computing system from the agent installed on the client computing system of the client, an update to the first dataset, Sodhi does not teach, Crabtree teaches, (the access portal 770 gives individual entities and people agency over their data which enhances the vitality of the market by means of more valuable data and data sets being compiled and made readily available. Changes made to data and data sets are persisted into the data store 710. The dataset valuation engine 740 can generate both a new value score and value score report for the user updated data set, [0070]),
Sodhi and Crabtree are both considered to be analogous to the claimed invention because they are both in the field of dataset valuation. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the data valuation method of Sodhi with the data updating techniques of Crabtree to enable entities and people to achieve data set validation to facilitate data set and algorithm bias certification and scoring, Crabtree, [0004].
wherein the update to the first dataset includes a delta update. Sodhi does not teach, Wang teaches a data-quality assurance module, (the data-quality assurance module 322 comprises a plurality of sequentially coupled functions including a governance standard setting function 422, a quality assessment function 424, a quality assurance function 426, and a metric extraction function 428. The governance standard setting function 422 configures the data dictionary 442 which defines the meta-data glossaries, including data terms and the corresponding descriptors (including but not limited to the meaning, owner, format, bounds and range of minimum and maximum values) based on the task definitions 444 (received from data consumers 204). Moreover, rules 446 and 448 for structured and unstructured datasets, respectively, are set or otherwise defined to reflect the validation policy in a specific field. Such rules may comprise rules for missing data imputation, data-integrity violation criteria, detection of duplicates and purging inconsistent data, and detection of outliers (see Reference R12), Wang, [0156-0157]).
Sodhi and Wang are both considered to be analogous to the claimed invention because they are both in the field of dataset valuation. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the data valuation method of Sodhi with the data integrity approach of Wang to provide improved effectiveness, Wang, [0007].
Claims 11-14 and 17-19 are rejected for reasons corresponding to those provided for Claims 1-4 and 7-9. In these claims, the addition of a system comprising a processor and a non-volatile memory does not change the rationale for the rejections under 35 U.S.C. § 103 or the referenced prior art (Sodhi teaches that the program 1060 is executable by the processor 1020 of the computer system 1010, and that memory and/or computer readable storage media includes non-volatile memory or non-volatile storage, [0052]).
7. Claims 5 and 15 are rejected under 35 U.S.C. § 103 as being unpatentable over Sodhi, (US 20220197914 A1), hereafter Sodhi, “Ranking Datasets Based on Data Attributes,” in view of Wang, (US 20230015813 A1), hereafter Wang, “Data-Sharing Systems and Methods which use Multi-Angle Incentive Allocation,” in further view of Crabtree, (US 20210112101 A1), hereafter Crabtree, “Data Set and Algorithm Validation, Bias Characterization, and Valuation,” and in further view of Selvanathan, (WO 20220292543 A1), hereafter Selvanathan, “Data Brokerage and Valuation System and Method.”
Regarding claim 5, the method of Claim 1, further comprising:
receiving, by the computing system, a plurality of attributes from a marketplace associated with a plurality of dataset sales; Sodhi teaches, (this information provides input regarding business requirements, so that matching data content may be identified as such when encountered in various dataset. For example, if a user wants to gain insight about marketing a particular product in a given region, datasets that contain information about sales of that product in that region would likely be more valuable than datasets that only included sales information of that product in a different region. General sales information about the product might also be valuable to this user, and user data use requirement inquiries may be structured to gather this level of detail, [0036]),
receiving, by the computing system, a plurality of buyer identifiers associated with the plurality of dataset sales; Sodhi does not teach, Selvanathan teaches, (the data brokerage and valuation system (102) includes a profile module (202) configured to build profiles of data sellers and data buyers, based upon registration information of the data sellers and the data buyers. [p. 2]),
determining, by the computing system, one or more similar attributes of the first dataset and the plurality of attributes from the marketplace associated with the plurality of dataset sales;
determining, by the computing system based on the one or more similar attributes and the plurality of buyer identifiers, one or more recommended buyers of the first dataset; Sodhi does not teach, Selvanathan teaches, (The data brokerage system (102) further includes a match module (208) configured to match the buyer’s required data and the seller’s data using artificial intelligence, [p. 2]), and
providing, by the computing system, the one or more recommended buyers to a client; Sodhi does not teach, Selvanathan teaches, (In an embodiment, various entities such as data seller entity system (104) and data buyer entity system (108) may belong to any complex network system such as a global supply chain including (but not limited to) a manufacturer system, a producer system, a distributor system, and a user/consumer/client via a network (106). Further, the complex network may be a network of buyers and sellers belonging to different supply chains globally. The Network (106) may include, but is not restricted to, a communication network such as Internet, PSTN, Local Area Network (LAN), Wide Area Network (WAN), Metropolitan Area Network (MAN), and so forth. In an embodiment, the network (106) can be a data network such as the Internet, [p. 4]).
Sodhi and Selvanathan are both considered to be analogous to the claimed invention because they are both in the field of dataset valuation. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the data valuation method of Sodhi with the buying and selling capabilities of Selvanathan to provide brokerage and valuation service between the different data buyers and the data sellers, Selvanathan, [p. 5].
Claim 15 is rejected for reasons corresponding to those provided for Claim 5. In this claim, the addition of a system comprising a processor and a non-volatile memory does not change the rationale for the rejections under 35 U.S.C. § 103 or the referenced prior art (Sodhi teaches the program 1060 is executable by the processor 1020 of the computer system 1010, and memory and/or computer readable storage media includes non-volatile memory or non-volatile storage, [0052]).
8. Claims 10 and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Sodhi, (US 20220197914 A1), hereafter Sodhi, “Ranking Datasets Based on Data Attributes,” in view of Wang, (US 20230015813 A1), hereafter Wang, “Data-Sharing Systems and Methods which use Multi-Angle Incentive Allocation,” in further view of Crabtree, (US 20210112101 A1), hereafter Crabtree, “Data Set and Algorithm Validation, Bias Characterization, and Valuation,” and in further view of Henderson, (US 20220292543 A1), hereafter Henderson, “Pop-Up Retail Franchising and Complex Economic System.”
Regarding claim 10, the method of Claim 1, further comprising:
receiving, by the computing system, from the electronic agent operating on the client computing system via the secure electronic network connection, an encrypted copy of the first dataset;
storing the encrypted copy of the first dataset in a first data store; Sodhi does not teach, Henderson teaches, (the instructions and/or data used in the practice of the invention may utilize any compression or encryption technique or algorithm, as may be desired. An encryption module might be used to encrypt data. Further, files or other data may be decrypted using a suitable decryption module, for example, [0223]),
Sodhi and Henderson are both considered to be analogous to the claimed invention because they are both in the field of dataset valuation. It would have been obvious to one of ordinary skill in the art before the effective filing date to combine the data valuation method of Sodhi with the data encryption of Henderson to provide broad utility and application, Henderson, [0228].
and storing an encryption key of the encrypted copy in a second data store, the second data store different from the first data store.
The storing of an encryption key in a second data store is a design choice with no benefit to the utility of the invention. Regardless, Henderson teaches that files or other data may be decrypted using a suitable decryption module, for example, [0223]. It would be a design choice to place the encryption key in a different data store, and it would have been obvious to one of ordinary skill in the art to rearrange parts of an invention, in this case choosing data store locations, (a memory coupled to the processor for storing digital data, an application program stored in the memory and accessible by the processor for directing processing of data by the processor, and one or more internal databases or cloud databases for storing a target market profile, datasets relating to georeferenced areas, and user registration criteria relating to a target market or constituency, Henderson [0004]), (MPEP 2144.04 I, 2144.04 VI (C)).
Claim 20 is rejected for reasons corresponding to those provided for Claim 10. In this claim, the addition of a system comprising a processor and a non-volatile memory does not change the rationale for the rejections under 35 U.S.C. § 103 or the referenced prior art (Sodhi teaches the program 1060 is executable by the processor 1020 of the computer system 1010, and memory and/or computer readable storage media includes non-volatile memory or non-volatile storage, [0052]).
Conclusion
9. Claims 6 and 16 are not rejected by prior art under 35 U.S.C. § 103. The closest prior art to the invention is Sodhi, (US 20220197914 A1), “Ranking Datasets Based on Data Attributes.” None of the prior art, alone or in combination, teaches the claimed invention as recited in these claims, wherein the novelty is in the combination of all the limitations and not in a single limitation.
Regarding claims 6 and 16, the plurality of attributes from a marketplace, determining similar attributes, grouping similar attributes, and high significance categories are taught by Sodhi. Determining the frequency of similar attributes and removing high-frequency and zero-frequency attributes to identify “scarce” attributes is not taught; the closest prior art for this portion of the claims is Sodhi, which scores attributes by degree of similarity. The references, individually or in combination, do not teach the complete scope of the claims.
10. The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure or directed to the state of the art, is listed on the enclosed PTO-892.
11. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL BOROWSKI whose telephone number is (703)756-1822. The examiner can normally be reached M-F 7:30 - 4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor can be reached on (571)272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MB/
Patent Examiner, Art Unit 3624
/MEHMET YESILDAG/Primary Examiner, Art Unit 3624