Prosecution Insights
Last updated: April 17, 2026
Application No. 18/112,862

APPARATUSES AND METHODS FOR GENERATING A COLLECTION DATASET

Status: Non-Final OA (§103)
Filed: Feb 22, 2023
Examiner: JONES, COURTNEY PATRICE
Art Unit: 3699
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: unknown
OA Round: 5 (Non-Final)

Grant Probability: 67% (Favorable)
OA Rounds: 5-6
To Grant: 3y 3m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 67% (158 granted / 235 resolved), +15.2% vs TC avg (above average)
Interview Lift: +23.3% on resolved cases with interview (strong)
Avg Prosecution: 3y 3m typical timeline; 37 currently pending
Total Applications: 272 across all art units
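The headline figures above reduce to simple ratios over the examiner's resolved cases. A minimal sketch (the function names are illustrative, not from any analytics API; the 66.7% without-interview baseline is an assumption backed out from the report's 90% with-interview figure and +23.3% lift):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gap between allowance rates with and without an interview."""
    return rate_with - rate_without

# 158 granted / 235 resolved, as reported above.
career = allow_rate(158, 235)
print(round(career))                          # 67, matching the dashboard

# Illustrative: 90% with interview less an assumed 66.7% baseline.
print(round(interview_lift(90.0, 66.7), 1))   # 23.3
```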

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 47.8% (+7.8% vs TC avg)
§102: 23.5% (-16.5% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 235 resolved cases
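The per-statute deltas can be checked for internal consistency by backing the Tech Center baseline out of each row (examiner rate minus reported delta). This sketch uses only the numbers shown in the chart; the per-statute baseline itself is not stated directly in the report:

```python
# Statute-specific allowance-after-rejection rates and "vs TC avg" deltas,
# copied from the chart above (percentages).
examiner_rate = {"101": 11.0, "103": 47.8, "102": 23.5, "112": 7.8}
vs_tc_avg     = {"101": -29.0, "103": 7.8, "102": -16.5, "112": -32.2}

# Implied Tech Center average per statute: rate minus delta.
tc_avg = {s: round(examiner_rate[s] - vs_tc_avg[s], 1) for s in examiner_rate}
print(tc_avg)  # every statute backs out to the same 40.0% baseline
```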

Office Action

§103
Acknowledgements

This communication is in response to applicant’s response filed on 01/15/2026. Claims 6 and 16 have been cancelled. Claims 1 and 11 have been amended. Claims 1-5, 7-15, and 17-20 are pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/15/2026 has been entered.

Response to Arguments

Applicant’s arguments, see pgs. 7-9, filed 01/15/2026, with respect to the rejection(s) of claim(s) under Claim Rejections - 35 USC § 103, that the currently cited prior art does not teach, suggest, or motivate “receiving user feedback for the collection dataset, wherein the user feedback comprises ratings on the collection dataset and instructions for reclassifying the collection dataset received by a user interface,” have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Dawson (US 20100083112).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-15, and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Quigley (US 20230196341) in view of Goldston (US 20220374503) in further view of De Leon (US 20240161123) in further view of Dawson (US 20100083112).

Regarding Claims 1 and 11, Quigley teaches receive a plurality of data comprising: user data comprising at least an NFT; rights data; and record data comprising at least a self-executing record (Paragraphs 0772, 0788, 0808-0809, and 0812 teach referring to FIG. 31, a tokenization platform may include a mystery box system for deploying and implementing the minting, crafting, unboxing/unpacking, and other related functionalities; to configure various smart contracts that provide functionality for minting the tokens, storing the tokens on a blockchain or other distributed ledger, selling the tokens, trading the tokens, unboxing/unpacking the tokens, crafting using the tokens, redeeming the tokens, and the like; a configuring user may define a set of smart contracts that perform different functions including at least a minting smart contract, an unboxing smart contract, and a crafting smart contract; the minting smart contract may be configured to generate NFTs that represent instances of respective types of packs and NFTs that represent instances of respective types of cards; FIG. 33 illustrates an example assets storage smart contract that stores a first NFT collection and a second NFT collection; exemplary details of some tokens of the first NFT collection, including collectible tokens and digital packs are also shown in tabular format, although this is merely illustrative and the token data may be stored using any appropriate data structure; the collectible tokens and/or digital packs may be embodied as non-fungible tokens; for example, each of example collectible tokens may have been minted using a first template (“Template A”), and thus (in this example) may share the same name (“NFT A”); when the token is assigned to a user’s account, the token may appear in a digital wallet associated with that user; FIG. 33 further shows example digital pack NFTs, which may be exchanged for collectible token NFTs using an unboxing mechanism; as will be discussed in further detail below, to use the unboxing mechanism, a user device may transfer the token to an unboxing smart contract, which may then be designated as the owner of the account, as shown for the example digital pack; the digital pack NFTs may include data fields that are distinct or contain different data compared to the collectible token NFTs, such as a data field containing a link to an unboxing smart contract, a data field indicating how many tokens will be received upon unboxing, and/or the like); generate, utilizing a collection classifier, a collection dataset as a function of the plurality of data (Paragraphs 0839-0841, 0844, and 0846 teach FIG.
40 shows an example data flow for using configuration data to configure the various smart contracts used to implement a token collection and associated functionality; the configuration data may be generated by developer devices, which may create the configuration data using manual input from a developer, using one or more automated tools, and/or the like, and then transmit the configuration data to the tokenization platform; the configurator subsystem may use the configuration functions of the minting smart contract to store the templates and/or schemas as template data and/or schema data in the minting smart contract; similarly, the configurator subsystem may use the configuration functions of the unboxing smart contract to store the unboxing recipes in the unboxing smart contract, and may use the configuration functions of the crafting smart contract to store the crafting recipes in the crafting smart contract; the configurator subsystem may further use sales data to configure the sales smart contract; the sales data may define prices that may be paid for specific tokens (e.g., prices of different packs of digital trading cards and/or collectible tokens) and may configure the sales smart contract to manage the transfer of tokens from one user to another (e.g., including updating the assigned owner in the asset storage smart contract), and/or the like; the sales data may configure the sales smart contract to call the minting smart contract to mint a token when the token is sold; the configurator subsystem may use the configuration data corresponding to a collection to configure and parameterize a pre-existing set of smart contracts; the smart contracts may be and/or contain parameterizable smart contract templates that are reused for multiple collections, providing functionality for each according to the configuration data received for each corresponding collection; additionally or alternatively, in response to receiving configuration data, the configurator 
subsystem may deploy new smart contracts to the distributed ledgers, then configure the new smart contracts using the configuration data; in response to receiving configuration data, the configurator subsystem may parameterize smart contracts, then deploy the parameterized smart contracts to the distributed ledgers); generate a command certificate comprising a plurality of collection actions related to the management of the plurality of data (Paragraphs 0845, 0875-0876, 0910-0912, and 0834 teach the configurator subsystem may further include user interface data, which may be used by a website or other user interface accessed by user devices to interact with the token collection; the user interface data may include data that indicates how a website should display tokens, configures the website to allow users to invoke crafting or unboxing mechanics, and the like; FIG. 45 illustrates a log of example distributed ledger transaction data that may be used to carry out the methods of FIG. 43 and/or 44; a first example transaction may include a first action for receiving a digital pack token, a second action for invoking an unbox function of an unboxing smart contract, a third action for boosting an account balance, a fourth action for burning the digital pack, and a fifth action for requesting a random number from a random number generator smart contract; then, a second example transaction may include a first action for receiving the requested random number, and a second action for logging the unboxing; then, a third example transaction may include a first action for claiming a collectible token, and a second action for minting the collectible token; in the first action, the unboxing smart contract may receive a digital pack token with a unique identifier (shown as an “asset_ids” value) from a user account (here, an account named “tlxfu.wam”), as described for; in the second action, the unbox function may be invoked using arguments that specify a collection identifier, a
box identifier (which may be the same as a template identifier), and the owner account; a user interface for browsing a user’s digital wallet, which may contain various collectible tokens, digital packs, and any other tokens (e.g., fungible tokens) associated with a user account; the tokenization platform may be configured to generate the user interface for access by the user device; the user device may communicate with a user’s digital wallet in order to obtain data about collectible tokens, digital packs, and/or other tokens owned by the user; the user device may communicate with node devices in order to receive data from the asset storage smart contract indicating which tokens are associated with a user account and various data associated with each token; then, using this data, the example user interface may display the tokens, which may entail display of any media assets associated with the token (e.g., by following IPFS links), display of a name of the token, display of a mint number of the token, and the like; the example user interface may include a function for selecting a particular digital pack and then selecting an unbox option to cause unboxing of the digital pack; for example, a user may select an “Unbox” button, which may cause the user device to communicate with the node devices to generate one or more transactions on the distributed ledgers; for example, the user device may cause a first transaction that transmits the digital pack to an unboxing smart contract, invokes an unbox function of the unboxing smart contract, boosts a user account, burns the digital pack token, and requests a random number from a random number generator smart contract, as discussed above with respect to FIG. 45; the crafting process may then proceed as discussed above with respect to FIGS. 43-45; FIG. 
36 illustrates an example user interface for viewing an example schema that may be used for a digital pack; the tokenization platform and/or mystery box system may generate the user interface, as well as other user interfaces for viewing schema data, template data, NFT data, and the like; such user interfaces may be accessed by various user devices and/or developer devices; the schema is for a CAPCOM digital pack that may be used to generate STREET FIGHTER trading cards; the schema may be associated with a collection identifier corresponding to a STREET FIGHTER collection and may define one or more attributes; the user interface may display information about the collection; the attributes of the schema may include a name attribute specified as a string, an image attribute specified as an image, a series attribute specified as a string, a contains attribute specified as a string, a URL of an unpacking smart contract specified as a string, a description specified as a string; these attributes may correspond to, for example, the token name, media assets, token series ID, token container data, token smart contract, and token description respectively, as illustrated at FIG. 32); receive a user input comprising at least a selection of a collection action of the plurality of collection actions (Paragraphs 0856-0857 teach FIG. 43 illustrates an example method that may be executed by a first example unboxing smart contract and/or tokenization platform; the unboxing smart contract is implicated when a user requests to redeem a pack of cards that is owned by the user; for example, a user may select a pack from his or her digital wallet and may click on a GUI element to request the unboxing of the pack; the unboxing smart contract receives a request to unbox a digital pack, which may be a mystery box; the request may include a unique identifier of the instance of the pack (e.g., the NFT ID, the minting number, and/or the like); the request may further indicate an account of the redeeming user); and perform the at least a selection of a collection action of the plurality of collection actions (Paragraphs 0857-0858 teach upon receiving the request, the unboxing smart contract may unbox the digital pack according to a set of rules, instructions, commands and the like based on an unboxing recipe defined in the unboxing smart contract, and may burn the NFT representing the specified digital asset (e.g., pack), such that the user cannot trade, sell, or try to re-redeem the unboxed pack; the unboxing smart contract determines a set of tokens to award the owner of the digital pack based on the unboxing rules corresponding to the digital pack; the attributes of the digital pack define the quantity of digital assets to award the unboxing user, and the unboxing rules define weights/probabilities of awarding respective types of digital asset (e.g., a specific type of card) to the user; if the digital pack is redeemable for ten token-based trading cards, the unboxing smart contract may randomly determine a characteristic, type, or some other aspect for each of the ten token-based cards in the digital pack at unboxing time in accordance with the unboxing rules).
However, Quigley does not explicitly teach rights data comprising at least a smart assessment datum, wherein the rights data comprises a user protocol, and wherein the user protocol comprises a plurality of authorization credentials requirements comprising a party category classification requirement. Goldston from the same or similar field of endeavor teaches rights data comprising at least a smart assessment datum, wherein the rights data comprises a user protocol, and wherein the user protocol comprises a plurality of authorization credentials requirements comprising a party category classification requirement (Paragraphs 0156-0158 and 0165 teach the system may also be configured to ensure that appropriate permissions from other owners (e.g., creators, contributors or otherwise) are obtained before any content can be shared in contemplation of an NFT transaction; the system can check appropriate permissions to determine whether the requested transfer or NFT transaction is permitted; examples of a designated recipient may include a producer, creator, manager, record label, publisher, a potential or actual purchaser via an NFT transaction, and so on; assuming the users are authorized and the transfer permitted, the system transfers the container to the designated recipient; upon authorizing a transfer, a sender may specify different levels of permission for different recipients; certain recipients may have set levels of permissions that identify parameters such as access types (review, modify, etc.), number of times access is permitted, durations or time windows in which access is permitted, further sharing rights, and so on; these different levels of permission can apply to recipients 611 who are purchasers or potential purchasers of rights via an NFT transaction).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Quigley to incorporate the teachings of Goldston for rights data to comprise at least a smart assessment datum, wherein the rights data comprises a user protocol, and wherein the user protocol comprises a plurality of authorization credentials requirements comprising a party category classification requirement. There is motivation to combine Goldston into Quigley because a UI can allow a custodian of the container (e.g., the author(s) of the content, publisher, recipient with designated permissions, or other recognized user) to provide an updated content file, access and play back the content file, modify the content file, add or remove the associated files or other metadata and otherwise edit the associated files or other metadata (Goldston Paragraph 0057). Controls may be implemented to allow only owners or administrators to make changes to the container items to help maintain integrity of the data. A mechanism can be provided such that if a user does not have permission to make changes, they can enter their suggested changes into the system. The system can then send a notification of the change request to the proper users who will view and approve or disapprove the requested changes. Access and modification activities may be tracked along with notifications being sent (e.g., based on notification settings). Where multiple approvals are required, approval may be based on various rules such as majority required, unanimous consent required, and so on (Goldston Paragraph 0068).
However, the combination of Quigley and Goldston does not explicitly teach generating the collection dataset comprises: receiving collection classifier training data, wherein the collection classifier training data comprises user data inputs, rights data inputs, and record data inputs correlated to historic collection data; receiving user feedback for the collection dataset, the user feedback comprising rearrangement data for the collection dataset received by a user interface; storing the user feedback as a portion of the historic collection data; and iteratively training the collection classifier as a function of the collection classifier training data, wherein training the collection classifier comprises: adjusting connections and weights between nodes in adjacent layers of the collection classifier. De Leon from the same or similar field of endeavor teaches generating the collection dataset comprises: receiving collection classifier training data, wherein the collection classifier training data comprises user data inputs, rights data inputs, and record data inputs correlated to historic collection data (Paragraphs 0019-0020, 0043, 0058, and 0062 teach to use the payment processing service, user may provide authentication credentials associated with an account of user with the service provider; the authentication credentials may be stored along with other relevant account information, e.g., a transaction history associated with user's account, in a database coupled to or otherwise accessible to server; the server may also be used by the service provider to implement an auditing tool for user feedback data associated with the payment processing service; such user communications may be received by the service provider via, for example, at least one communications interface of server; the user feedback may include, for example, user complaints or reports of issues related to one or more service features; the service provider server may access and retrieve the user
feedback data including any text and audio communications from the database for preprocessing by data preprocessor; the data preprocessor may perform various data conversion or preprocessing operations on the user feedback data to prepare the data for processing by the components of ML-based data classifier and use by ML models thereof; user feedback auditor may be used to analyze and classify the user communications received by the service provider through various communication channels; each transaction claim may include a user's comment (e.g., text-based input, text-based electronic data) or reason for submitting the claim/dispute; the process may begin by accessing user feedback data corresponding to user communications (e.g., email, text, or phone calls) received via at least one customer service interface of a service provider; the user feedback data may include, for example, a first set of feedback categories associated with a first classification of the user communications); receiving user feedback for the collection dataset (Paragraphs 0048, 0050, and 0065-0066 teach a semantic analyzer may then perform a semantic search for additional words/phrases in the user communications that are semantically equivalent to some predetermined number of the most frequently used words/phrases; the semantic search may be performed using, for example, a machine learning framework (e.g., a deep neural network), which may be used to rank and sort individual words/phrases in the user communications according to their cosine similarity with the most frequent words/phrases; the results of the semantic search may be used to generate a second feature representation of the user communications from the first representation produced by feature extractor; the second feature representation may include a second set of textual data features extracted from the user communications, which include semantic equivalents of the first set of textual data features in the first representation; the
second set of feedback categories may include, for example, additional categories that may be different from the first set of (predefined) feedback categories previously used to classify the user communications (e.g., based on the first classification with categories selected by customer service agents of the service provider, as described above); the ML engine may then be used to reclassify the user communications to generate a second classification of the user communications according to the second set of feedback categories that correspond to the plurality of clusters generated by clustering engine; the ML engine may determine an intent of each of the user communications from extracted features associated with each of the interactions using the machine learning-trained classifier and classify each of the communications as corresponding to a respective category of the second set of feedback categories based at least in part on the intent of that communication; a second machine learning model is trained using the second feature representation generated from the first machine learning model; each of the plurality of clusters may correspond to a different one of the feedback categories in the second set based on a unique pattern of the second set of textual features within the user communications; the trained second machine learning model may be used to generate a second classification of the user communications according to a second set of feedback categories); storing the user feedback as a portion of the historic collection data (Paragraphs 0025-0026 teach given the ubiquity of online devices and the wide array of feedback data available to the service provider, the disclosed feedback auditing techniques may be used to analyze and classify (or reclassify) user feedback data; the results of this classification may provide useful insights for the service provider and any affiliated entities (e.g., merchants) to improve existing features of their respective products 
or services and/or predict market trends for adding new product/service features tailored to different user groups and markets; the server may store the user feedback data, including the set of feedback categories, with other account information for user, e.g., within database, for later access and analysis by the user feedback auditing tool of the service provider; the analysis performed by the auditing tool may include applying one or more machine learning models to the user feedback data to reclassify the corresponding user communications according to a second set of feedback categories that improves upon the initial classification); and iteratively training the collection classifier as a function of the collection classifier training data, wherein training the collection classifier comprises: adjusting connections and weights between nodes in adjacent layers of the collection classifier (Paragraphs 0050-0051 and 0054 teach the ML engine may train each ML model in the plurality of machine learning models to reclassify the user communications based on the second feature representation; the ML engine may generate multiple ML models that are based on or correspond to the second set of feedback categories and be adapted to train each of the ML models with respective training datasets to form different ML-trained classifiers; the respective training datasets may facilitate, for example, supervised learning by including labeled interaction data indicating what information in the user communications pertains to which of the feedback categories; when generating each ML-trained classifier, the features in the training datasets may be used to generate different layers of the ML model used for the classification, which may include different nodes, values, weights, and the like; the ML engine may utilize a supervised machine learning algorithm, function, or technique that utilizes continuous and/or iterative learning to generate the model; when training each ML model, the ML engine may utilize feedback and annotations or labeling from an agent device or the device of a customer agent supervisor (not shown) to iteratively train the model; for example, user communications in the training data set and/or other data sets may be flagged using the machine learning technique to identify different categories of relevant feedback data, where the supervisor's device may send an indication that the flagged communications were previously misclassified; the identification of such misclassified communications may be used to retrain the ML model in a continuous or iterative training process so that incorrect classifications may be reduced and/or eliminated, and the ML model may more accurately classify user communications).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Quigley and Goldston to incorporate the teachings of De Leon to generate the collection dataset comprises: receiving collection classifier training data, wherein the collection classifier training data comprises user data inputs, rights data inputs, and record data inputs correlated to historic collection data; receive user feedback for the collection dataset, the user feedback comprising rearrangement data for the collection dataset received by a user interface; store the user feedback as a portion of the historic collection data; and iteratively train the collection classifier as a function of the collection classifier training data, wherein training the collection classifier comprises: adjusting connections and weights between nodes in adjacent layers of the collection classifier.
There is motivation to combine De Leon into the combination of Quigley and Goldston because the service provider may use the reclassified user feedback data to gain valuable insight into the underlying issues that concern its users and make any necessary changes or improvements to its product(s) or service(s). The auditing and reclassification of the user feedback data may also be used by the service provider to improve its customer service interface, e.g., by training its customer service agents to produce more accurate classifications or by revising the predefined categories to include additional categories that are more specific to the types of issues reported by its customers (De Leon Paragraph 0014). However, the combination of Quigley, Goldston, and De Leon does not explicitly teach receiving user feedback for the collection dataset, wherein the user feedback comprises ratings on the collection dataset and instructions for reclassifying the collection dataset received by a user interface; generating rearrangement data, wherein generating the rearrangement data comprises a click and drag widget configured to re-categorize the plurality of data in the collection dataset. 
Dawson from the same or similar field of endeavor teaches receiving user feedback for the collection dataset, wherein the user feedback comprises ratings on the collection dataset and instructions for reclassifying the collection dataset received by a user interface (Paragraphs 0017-0020 teach an example operation for updating an avatar's behavior rating; a user of the first avatar indicates a desire to submit a good rating for the second avatar by clicking a rate interaction button; the opportunity to rate the behavior of an avatar in the virtual universe is not limited to interactions between avatars; the user indicates a desire to submit a bad rating for the avatar by clicking a report behavior button; an indicated behavior score based on a behavior scale and comments associated with the behavior score are determined; a user may indicate a behavior score by clicking a radio button in a scale, typing a number within a range into a text box, etc.; a user may or may not submit comments along with the behavior score when rating an avatar; the avatar's behavior rating is updated with the behavior score and comments associated with the behavior score are saved); generating rearrangement data, wherein generating the rearrangement data comprises a click and drag widget configured to re-categorize the plurality of data in the collection dataset (Paragraphs 0021 and 0023 teach the behaviors of a subset of avatars in the virtual universe are analyzed; if an avatar's behavior rating decreases below a minimum acceptable behavior rating threshold (e.g., the avatar's behavior rating drops below 2 on a 5 point behavior scale), the avatar is subject to surveillance within a virtual universe (i.e., recategorized); the avatar's behavior rating is updated with the behavior score; the avatar's behavior rating may be a sum of positive and negative scores, an arithmetic average, a weighted average, etc.; in some embodiments, comments may be left indicating the specific behavior that led to the behavior score being included in the avatar's behavior rating).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combination of Quigley, Goldston, and De Leon to incorporate the teachings of Dawson to receive user feedback for the collection dataset, wherein the user feedback comprises ratings on the collection dataset and instructions for reclassifying the collection dataset received by a user interface; and generate rearrangement data, wherein generating the rearrangement data comprises a click and drag widget configured to re-categorize the plurality of data in the collection dataset. There is motivation to combine Dawson into the combination of Quigley, Goldston, and De Leon because behavior ratings allow a virtual universe to reward good virtual universe citizens and punish bad virtual universe citizens. Avatars that engage in good behaviors and maintain high behavior ratings may be rewarded with incentives in the virtual universe. These incentives may include access to bonus areas in the virtual universe, access to bonus content (e.g., eBooks, music downloads, etc.), citizenship awards, etc. Avatars that engage in bad behaviors and have low behavior ratings may be subject to punishment in the virtual universe. The severity of punishment depends on certain thresholds and can range from restriction from access to certain areas in the virtual universe, restriction from owning certain items, suspension or termination of the avatar's account, observation by a virtual universe administrator, etc. (Dawson Paragraph 0024).
Regarding Claim 1, Quigley teaches an apparatus for generating a collection dataset, the apparatus comprising at least a processor; and a memory communicatively connected to the at least a processor, the memory containing instructions configuring the at least a processor to perform operations (Paragraphs 1155, 1160, 1164, and 1168 teach a special-purpose system includes hardware and/or software and may be described in terms of an apparatus, a method, or a computer-readable medium; the term hardware encompasses components such as processing hardware, storage hardware, networking hardware, and other general-purpose and special-purpose components; examples of processing hardware include a central processing unit (CPU), a graphics processing unit (GPU), an approximate computing processor, a quantum computing processor, a parallel computing processor, etc.; storage hardware is or includes a computer-readable medium that only excludes transitory electrical or electromagnetic signals propagating through a medium (such as on a carrier wave); a computer-readable medium in this disclosure is therefore non-transitory, and may also be considered to be tangible). Regarding Claim 11, Quigley teaches a method for generating a collection dataset (Paragraphs 0856 and 0861 teach FIG. 43 illustrates an example method that may be executed by a first example unboxing smart contract 3126 and/or tokenization platform; FIG. 44 illustrates a second example method 4400 that may be executed by an unboxing smart contract). 
Regarding Claims 2 and 12, the combination of Quigley, Goldston, De Leon, and Dawson teaches all the limitations of claims 1 and 11 above; and Quigley further teaches wherein the user data further comprises virtual content (Paragraph 0761 teaches the tokenization platform and/or mystery box system may provide functionalities for automatically generating and deploying digital tokens that may be used as collectible tokens for trading, gaming, crafting, and the like; the digital collectible tokens may be “minted” (e.g., generated and stored on blockchains or other distributed ledgers) programmatically such that they have various features/attributes that enable them to function as collectibles; for example, the tokenization platform may mint a collection of baseball player trading cards, where each card corresponds to a particular professional baseball player with corresponding attributes; although each token may be unique (e.g., the token may be an NFT), it may follow a certain template, such that there may be multiple tokens corresponding to the same player, character, item, etc.). 
Regarding Claims 3 and 13, the combination of Quigley, Goldston, De Leon, and Dawson teaches all the limitations of claims 1 and 11 above; and Quigley further teaches wherein the rights data further comprises a user protocol (Paragraphs 0858-0860 teach the unboxing smart contract determines a set of tokens to award the owner of the digital pack based on the unboxing rules corresponding to the digital pack; the attributes of the digital pack define the quantity of digital assets to award the unboxing user, and the unboxing rules define weights/probabilities of awarding respective types of digital asset (e.g., a specific type of card) to the user; if the digital pack is redeemable for ten token-based trading cards, the unboxing smart contract may randomly determine a characteristic, type, or some other aspect for each of the ten token-based cards in the digital pack at unboxing time in accordance with the unboxing rules; the cards that are generated by the unboxing rules in connection with an unboxing may be all “build level” (or level 0) cards, which are the lowest tier of cards. 
In some embodiments, the unboxing rules define the probability of each type of card that can be awarded during an unboxing; if there are ten different characters that may appear on a build level card, then the unboxing rules may define ten probabilities; the unboxing smart contract and associated unboxing rules may mint one or more NFTs, as described herein, based on the type(s) of card(s) determined by the unboxing smart contract; a new NFT may be minted for each new card, corresponding to the type of the card and/or some characteristic of a card, such as a sub-variant of a given card type; the unboxing smart contract and associated unboxing rules may send a request to generate a new NFT to the minting smart contract, whereby the request indicates an account ID and a template ID or a set of attributes of the card to be minted; the minting smart contract mints the new NFT representing the card and assigns the new NFT to the account of the user; the unboxing contract may operate in this manner for each card in the pack; the unboxing smart contract may select a particular pre-minted asset from a set of available assets to award the user instead of minting a new digital asset, as is discussed elsewhere in the disclosure; the unboxing smart contract burns the NFT representing the digital pack; the unboxing smart contract (or another smart contract) may update the ownership data of the NFT representing the digital pack to NULL or to an inaccessible account, such that the digital pack cannot be traded, sold, or redeemed again). 
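The unboxing flow Quigley describes reduces to three steps: draw one card type per pack slot according to the weighted unboxing rules, award (mint) a token per draw, and burn the pack token so it cannot be redeemed again. The sketch below is a loose illustration of that flow under assumed names (`unbox`, a `pack` dict, a `rules` weight table); it is not Quigley's smart-contract code, and the on-chain minting and burning steps are collapsed into plain Python.

```python
# Hypothetical sketch of Quigley's unboxing flow (Paragraphs 0858-0860).
import random

def unbox(pack: dict, unboxing_rules: dict[str, float], rng=random) -> list[str]:
    """Draw one card type per slot in the pack, then 'burn' the pack.

    unboxing_rules maps each card type to its award weight/probability,
    mirroring the per-type probabilities the unboxing rules define.
    """
    types = list(unboxing_rules)
    weights = list(unboxing_rules.values())
    # One weighted random draw per card; pack["contains"] is the card count.
    cards = rng.choices(types, weights=weights, k=pack["contains"])
    # "Burn": set ownership to NULL so the pack cannot be traded or redeemed again.
    pack["owner"] = None
    return cards

rules = {"common": 0.70, "rare": 0.25, "legendary": 0.05}
pack = {"contains": 10, "owner": "alice"}
awarded = unbox(pack, rules)  # e.g., ten card types drawn per the weights
```

In the reference these draws and the burn would be executed by the unboxing smart contract, with a separate minting smart contract creating an NFT per awarded card; the dict mutation here only stands in for setting the pack token's ownership to NULL or an inaccessible account.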
Regarding Claims 4 and 14, the combination of Quigley, Goldston, De Leon, and Dawson teaches all the limitations of claims 1 and 11 above; and Quigley further teaches wherein the rights data further comprises an informative assessment datum (Paragraph 0795 teaches a template may further include, for each data attribute, an indication of whether the data attribute is mutable or immutable; indications of which data attributes are mutable, and which are immutable, may be stored as a separate data attribute; the template may further define whether additional attributes may be added, whether the added attributes may be mutable or immutable, which parties have permission to add the attributes, and the like). Regarding Claims 5 and 15, the combination of Quigley, Goldston, De Leon, and Dawson teaches all the limitations of claims 1 and 11 above; and Quigley further teaches wherein the record data further comprises a transaction history (Paragraph 0489 teaches the distributed ledger may store any suitable data relating to an item, a user, a seller, and the like; the distributed ledger may store item-related data such as transaction history). Regarding Claims 7 and 17, the combination of Quigley, Goldston, De Leon, and Dawson teaches all the limitations of claims 4 and 14 above; and Quigley further teaches wherein the memory contains instructions further configuring the at least a processor to populate the historic collection data by a web crawler (Paragraphs 0485 and 0836-0837 teach the buyer marketplace system may depict items as individual thumbnail images; a simple box style user interface element can be added to the Item detail pages to display the attributes of an item, including an item description attribute, item notes attributes, and a seller URL attribute; an item description field on the GUI can support clickable URLs that can redirect platform users to pages with more information about the product or other relevant pages; FIG. 
38 shows an example user interface depicting two different digital packs; the example user interface may be displayed on a user device, for example, if a user is viewing digital packs owned by the user that are in the user's digital wallet; the example user interface may also be displayed on a developer device, for example, if a developer is interacting with the mystery box system to view or configure digital packs for a token collection; a first digital pack may be unboxed to obtain 10 STREET FIGHTER collectible tokens, whereas a second digital pack may be unboxed to obtain 60 STREET FIGHTER collectible tokens, as shown in FIG. 38; the example digital packs shown in FIG. 38 may each correspond to different templates that share a common schema; as shown at FIG. 36, an example schema may specify a name attribute, image attribute, series attribute, contains attribute, unpack_url attribute, and description attribute. Here, the first digital pack 3144 may have a first name value (e.g., "Normal Pack"), a link to a first image (as shown in FIG. 38), a series value (e.g., "Series 1"), a first contains value (e.g., "10" indicating that the pack contains 10 cards), an unpack URL referring to an unpacking smart contract, and a first description value (e.g., "10 Digital Cards"); each of these data values may be specified by a first template corresponding to the first digital pack; the second digital pack may contain a different name (e.g., "Ultimate Pack"), a different link to a different image (as shown in FIG. 38), the same series value (e.g., "Series 1"), a different contains value (e.g., "60" indicating that the pack contains 60 cards), the same unpack URL referring to the same unpacking smart contract, and a different description value (e.g., "60 Digital Cards")). 
Regarding Claims 8 and 18, the combination of Quigley, Goldston, De Leon, and Dawson teaches all the limitations of claims 1 and 11 above; and Quigley further teaches wherein generating the command certificate comprises utilizing a certificate classifier trained using command certificate training data comprising market data and a protocol library (Paragraphs 0913-0914 teach analytics data may be provided at several levels; at a highest level, the analytics and reporting system may provide analytics data about token collections as a whole; for example, analytics and reporting system may generate analytics data comprising a total issued number of tokens for a given collection, an average price for all collection tokens sold via a marketplace, a total available supply, a total issued supply, or total maximum supply for all tokens, a total number of VIRL tokens redeemed, a total number of real-world items available to be redeemed, and the like; the analytics and reporting system may further generate one or more popularity factors that may measure the amount of activity for a particular collection, including minting activity, sales activity, unboxing activity, crafting activity, redemption activity, and the like; at a slightly lower level, the analytics and reporting system may generate analytics data corresponding to various types of tokens, such as various schemas, templates, attributes, or groups of attributes; for example, the analytics and reporting system may monitor which types of tokens are most popular according to the various popularity factors mentioned above, which types cause a token to increase or decrease in value on a marketplace, average prices for various types, supply information for various types (e.g., available supply, issued supply, maximum supply), and the like; furthermore, the analytics and reporting system may keep logs of changes in value, popularity, supply, or other measurements for different types of tokens over time). 
Regarding Claims 9 and 19, the combination of Quigley, Goldston, De Leon, and Dawson teaches all the limitations of claims 1 and 11 above; and Quigley further teaches wherein a collection action of the plurality of collection actions comprises a transactional option (Paragraph 0788 teaches a configuring user may define a set of smart contracts that perform different functions; the set of smart contracts may include at least a minting smart contract, an unboxing smart contract, and a crafting smart contract). Regarding Claims 10 and 20, the combination of Quigley, Goldston, De Leon, and Dawson teaches all the limitations of claims 1 and 11 above; and Quigley further teaches wherein the memory contains instructions further configuring the at least a processor to post the plurality of data on an immutable sequential listing, wherein posting comprises posting a cryptographic accumulator (Paragraphs 0422-0423 teach the distributed ledger transaction systems and methods described herein utilize distributed ledger technology (e.g., blockchain technology) in combination with smart contracts to allow users to negotiate, document, and/or execute a variety of different transactions; the different transactions include securitized decentralized loan transactions; these loan transactions include loan transactions that are secured by traditional types of collateral and/or by digital assets; distributed ledger technology forms the basis for cryptocurrencies that are rapidly expanding in application and adoption; such cryptocurrencies augment or replace existing payment methodologies such as cash, but also provide a decentralized system for processing transfers of the cryptocurrency; the basis for the distributed ledger/blockchain technology is a linked list of data blocks; each block contains a link to the prior block in the chain and encrypted data; the encrypted data may include transaction data documenting the exchange of a digital currency, software such as an executable 
digital contract, and data associated with the use of a digital contract by specific parties, although it may also include other types of data as described in further detail below; the data in each block in the distributed ledger includes a hash of the previous block in the chain as a means of identifying and preventing attempts to modify prior blocks in the distributed ledger). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Gal et al. (US 20140335497) Paragraph 0330 teaches the "planning like" activities that a teacher may perform in real time, while the lesson is in progress, may include, for example, changing or moving or dragging the Stop Line to a new location; dragging a student's name/avatar/icon from one group to another group (or otherwise modifying groups, or re-grouping students); writing a comment or "virtual sticky note" to himself with regard to particular student(s) or topics or content; or the like. Optionally, some implementations allow the teacher to perform in real-time while the lesson is in progress, or subsequently after the lesson has ended, one or more modifications and/or changes as described herein. Iyer et al. (US 20180232785) teaches a method and system for obtaining interactive user feedback in real-time by a feedback obtaining system. 
The feedback obtaining system establishes a connection between the user device of a user and a server of a service provider based on a user location received from the user device, receives static data of the user from the server and dynamic data of the user from a capturing device located at a site of the service provider, identifies contextual information associated with the user based on the static and dynamic data, provides one or more feedback queries for the user from a database based on the contextual information, provides one or more sub-feedback queries for the user based on the response of the user to the one or more feedback queries, and obtains user feedback based on the response of the user to the one or more sub-feedback queries and the one or more feedback queries and implicit feedback. The use of implicit feedback together with actual feedback provides effective feedback from users. Any inquiry concerning this communication or earlier communications from the examiner should be directed to COURTNEY JONES whose telephone number is (469)295-9137. The examiner can normally be reached 7:30 am - 4:30 pm CST (M-Th). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Neha Patel, can be reached at (571) 270-1492. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. 
Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /COURTNEY P JONES/Primary Examiner, Art Unit 3699

Prosecution Timeline

Feb 22, 2023
Application Filed
Dec 18, 2023
Non-Final Rejection — §103
May 22, 2024
Examiner Interview Summary
May 22, 2024
Applicant Interview (Telephonic)
Jun 19, 2024
Response Filed
Jul 08, 2024
Final Rejection — §103
Jan 13, 2025
Request for Continued Examination
Jan 15, 2025
Response after Non-Final Action
Feb 27, 2025
Non-Final Rejection — §103
Aug 07, 2025
Applicant Interview (Telephonic)
Aug 07, 2025
Examiner Interview Summary
Sep 04, 2025
Response Filed
Sep 16, 2025
Final Rejection — §103
Jan 15, 2026
Request for Continued Examination
Feb 12, 2026
Response after Non-Final Action
Feb 17, 2026
Non-Final Rejection — §103
Mar 04, 2026
Interview Requested
Mar 24, 2026
Applicant Interview (Telephonic)
Mar 24, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597018
DECENTRALIZED IDENTITY-BASED COMMUNICATION SERVICE
2y 5m to grant Granted Apr 07, 2026
Patent 12591894
FRAUD PREVENTION VIA BENEFICIARY ACCOUNT VALIDATION
2y 5m to grant Granted Mar 31, 2026
Patent 12586077
SYSTEMS AND METHODS FOR END TO END ENCRYPTION UTILIZING A COMMERCE PLATFORM FOR CARD NOT PRESENT TRANSACTIONS
2y 5m to grant Granted Mar 24, 2026
Patent 12579543
HIERARCHICAL DIGITAL ISSUANCE TOKENS AND CLAIM TOKENS
2y 5m to grant Granted Mar 17, 2026
Patent 12572936
QR CODE PAYOR TRACKING AND REPEAT PAYMENT PREVENTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
67%
Grant Probability
90%
With Interview (+23.3%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 235 resolved cases by this examiner. Grant probability derived from career allow rate.