Prosecution Insights
Last updated: April 19, 2026
Application No. 16/038,047

SYSTEMS AND METHODS TO DISPLAY THREE DIMENSIONAL DIGITAL ASSETS IN AN ONLINE ENVIRONMENT BASED ON AN OBJECTIVE

Non-Final OA: §101, §103, §112, §DP
Filed: Jul 17, 2018
Examiner: SITTNER, MICHAEL J
Art Unit: 3621
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Trivver, Inc.
OA Round: 10 (Non-Final)

Grant Probability: 11% (At Risk)
OA Rounds: 10-11
To Grant: 4y 9m
With Interview: 26%

Examiner Intelligence

Career Allow Rate: 11% (grants only 11% of cases; 42 granted / 381 resolved; -41.0% vs TC avg)
Interview Lift: +15.4% among resolved cases with interview (strong)
Avg Prosecution: 4y 9m (typical timeline); 47 currently pending
Total Applications: 428 across all art units (career history)

Statute-Specific Performance

§101: 29.6% (-10.4% vs TC avg)
§103: 36.9% (-3.1% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 22.2% (-17.8% vs TC avg)

Deltas are measured against a Tech Center average estimate; based on career data from 381 resolved cases.

Office Action

Rejections: §101, §103, §112, §DP
DETAILED ACTION

Status of Claims

The present application, filed on or after 3/16/2013, is being examined under the first inventor to file provisions of the AIA. This action is in reply to the RCE, Remarks, and Amendments filed 10/27/2025. Claims 1, 8, and 15 are amended. Claims 1-20 have been examined and are pending.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/27/2025 has been entered.

(AIA) Examiner Note

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were effectively filed absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned at the time a later invention was effectively filed in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Interpretation of Claim Language

The phrase "placement of at least one 3D digital asset" has been interpreted, for purposes of application of the prior art, as discussed in the claim interpretation of the previous Non-Final OA (mailed 6/30/2020, pg. 3): "…the Examiner interprets this feature as a '2D rendering of a 3D digital asset'; 3D assets as taught in the instant specification only exist in applicant's virtual world, not in a 3D world, and are displayed to a user only via a 2D monitor display." The Examiner notes that this interpretation has been confirmed by Applicant's own Remarks (filed 9/25/2020, pgs. 7-8), stating the following: "Particularly, given the current state of technology and in light of the specification, every known '3D digital object' is a 2D rendition that creates an illusion to the viewer's eye to believe it to be a 3D object. In other words, a person having ordinary skill in the art understands and appreciates that a 3D object displayed on a graphical user interface will always be a 3D rendering displayed on a 2D interface (not an actual tangible '3D object')… Therefore, any reading of the specification that results in an interpretation of rendering an 'actual' 3D digital object is not supported by the understanding of one having ordinary skill in the art…"

Per a phone conversation with Applicant on 5/30/2025, as noted in the Final Rejection (6/26/2025), Applicant asserts… the "client computer" is intended to be considered part of his claimed "system." The term "programmatic function" is open to interpretation per the plain meaning of the term "function" as modified by the term "programmatic," which includes a combination of "scripts" (as apparently confirmed per Applicant's own arguments and characterizations of "programmatic function" in the Appeal Brief, filed 4/25/2024, pgs. 9-11). The Examiner notes the entire Specification, indeed the entire original disclosure, is silent as to the term "programmatic function."
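As background for the claim-interpretation point above (that a "3D digital asset" reaches the user only as a 2D rendering on a monitor), this corresponds to the standard projection step any 3D engine performs before display. A minimal sketch follows; the pinhole-camera model and all names here are illustrative assumptions, not anything taken from the application's record:

```python
def project_to_screen(point3d, screen_w, screen_h, focal=1.0):
    """Perspective-project a 3D point (x, y, z), z > 0, to 2D pixel
    coordinates. Illustrates that a '3D digital asset' is ultimately
    delivered to the viewer as 2D screen positions (sketch only)."""
    x, y, z = point3d
    # Similar triangles give normalized device coordinates in [-1, 1].
    ndc_x = focal * x / z
    ndc_y = focal * y / z
    # Map NDC to pixel coordinates (y flipped: screen y grows downward).
    px = (ndc_x + 1) * screen_w / 2
    py = (1 - ndc_y) * screen_h / 2
    return (px, py)
```

A point on the camera axis lands at screen center, and points farther away project closer to the center, which is the "illusion to the viewer's eye" the Remarks describe.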
The Examiner acknowledges the Applicant has argued (Appeal Brief, filed 4/25/2024, pgs. 10-11) that "…the event triggers described above [i.e. Table 1 of the '679 Application] provide support for a 'programmatic function' that results in generation of engagement data related to interaction or viewability with the digital asset, as claimed, within the scope of the invention." And per Applicant's arguments filed 2/18/2023, the Applicant asserts: "a person having ordinary skill in the art would appreciate that to determine user interaction or viewability, programmatic function(s) like OnClicked(); GetScreenRealEstate(); isInView(); etc. can be used."

Regarding this latter statement by the Applicant, the Examiner notes that Applicant's original disclosure is silent as to any showing of how such supposed functions operate; only their names allude to implied functionality, but at this level of generality this is nothing more than an empty shell or a wish for a function, unless Applicant means he is using off-the-shelf, well-known functions which are publicly available. If this is the case, then Applicant should state on the record whether he is making use of a previously well-known function or whether the "programmatic function," including its asserted capabilities, is supposedly his own invention.

Regarding Applicant's Specification, the Examiner notes the closest mention of anything resembling the now-claimed "programmatic function" is at paragraph [0028], which notes: "…In yet another embodiment, smart object manager includes scripts [programmatic functions?] and determines SOMD [smart object metric data] associated with each smart object independently (that is, no information is transmitted from the smart object, but rather the smart object manager keeps track of the smart objects within the user's viewport or screen and collects data [engagement metric data] based on user action/interaction/viewability with each smart object…)"

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(B) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(b) or (for pre-AIA) 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor, a joint inventor, or (for pre-AIA) the applicant regards as the invention.

Independent claims 1, 8, and 15 have each been amended to recite limitations directed towards the following, exemplified in the features of claim 1 as provided below:

[Claim 1 amendment reproduced as an image in the original Office Action (media_image1.png); not recoverable here.]

Respectfully, the aforementioned amendments fail to clearly identify which of the recited objects (i.e. the computer server, or the client computer, etc.) is intended to perform the action recited as "and records the engagement metric data"; i.e. it is not clear whether Applicant is attempting to claim that his "computer server," or "client computer machine," or perhaps even "3D engine" or "programmatic function of a 3D engine" is intended to perform this "record[ing] the engagement metric data." This results in the claims being considered indefinite.
The claim language is not clear in this regard, and a person of ordinary skill in the art would not be apprised of the proper scope of subject matter which Applicant is attempting to claim. Therefore, for at least this reason, the claims are held to be indefinite for failing to particularly point out and distinctly claim the subject matter which the inventors regard as the invention.

Dependent claims 2-7, 9-14, and 16-20 inherit the deficiencies of their parent claim and are also rejected under 35 U.S.C. 112(b) or (for pre-AIA) 35 U.S.C. 112, second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor, a joint inventor, or (for pre-AIA) the applicant regards as the invention.

The following is a quotation of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), first paragraph:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.

The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor(s), at the time the application was filed, had possession of the claimed invention. Claims 1, 8, and 15 have been amended to recite in part the following:

[Claim amendment reproduced as an image in the original Office Action (media_image2.png); not recoverable here.]

The limitation is considered impermissible new matter. The Examiner acknowledges the Applicant has previously argued (Appeal Brief, filed 4/25/2024, pgs. 10-11) that "…the event triggers described above [i.e. Table 1 of the '679 Application] provide support for a 'programmatic function' that results in generation of engagement data related to interaction or viewability with the digital asset, as claimed, within the scope of the invention." And per Applicant's arguments filed 2/18/2023, the Applicant asserts: "a person having ordinary skill in the art would appreciate that to determine user interaction or viewability, programmatic function(s) like OnClicked(); GetScreenRealEstate(); isInView(); etc. can be used." However, these passages do not illuminate what "computations" or "functions" are intended to encompass Applicant's now-recited "a programmatic function of a 3D engine, etc."

Applicant's Remarks (filed 10/27/2025, pg. 1) assert support for the amendment is found in the passage reproduced as an image in the original Office Action (media_image3.png; not recoverable here). The Examiner disagrees. Applicant's entire original disclosure (including co-pending applications) is completely silent as to a "programmatic function of a 3D engine," silent as to "a programmatic function," and silent as to "a 3D engine." There is no support for the now-claimed programmatic function of a 3D engine which performs the functions now claimed.
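As context for the functions the Applicant names (OnClicked(), isInView(), etc.), client-side engagement hooks of this general kind are conventionally written as short callbacks that append events to a log. The sketch below illustrates only that convention; the class and method names are assumptions for illustration, not anything disclosed in the application:

```python
class SmartObjectTracker:
    """Client-side engagement log illustrating the general shape of
    hooks like OnClicked()/isInView(). Hypothetical sketch; not the
    applicant's disclosed implementation."""

    def __init__(self):
        self.engagement_log = []

    def on_clicked(self, asset_id):
        # Record a click interaction with the 3D digital asset.
        self.engagement_log.append((asset_id, "click"))

    def on_viewable(self, asset_id, seconds):
        # Record how long the asset was visible in the viewport.
        self.engagement_log.append((asset_id, "view", seconds))
```

Under this conventional pattern the recording happens wherever the callbacks execute, which is precisely the client/server ambiguity the §112(b) rejection above identifies in the claim language.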
Furthermore, the Examiner finds that the only mention of "screen bounds" is in Applicant's co-pending specification 15/209,679 at paragraph [0035], and this only mentions it as follows: "…a screen bounds function (or equivalent) can be used to obtain an approximation (percentage) of the screen that the digital smart object is covering." However, a "screen bounds function" does not appear to be the same as the recited "screen-bounds computations," which is recited as performing a different function, i.e. processing rendering data to detect user interaction or viewability of the at least one 3D digital asset. Obtaining a percentage of the screen is different than merely detecting interaction or viewability.

Furthermore, the Examiner has noted that the Remarks (filed 3/21/2025, bottom of pg. 8) appeared to try to address this deficiency by stating the following: "Additionally, the use of bounding box computations is supported by __"; i.e. no support is actually identified by the Applicant. Although the Applicant has had ample opportunity to provide evidence, the Applicant has failed to do so. The Examiner agrees! There is no support found in the specification, nor the entire original disclosure, for this newly claimed feature. Accordingly, the claims are improperly directed to impermissible new matter.

Dependent claims 2-7, 9-14, and 16-20 inherit the deficiencies of their parent claim and are also rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (i.e. a judicial exception) without significantly more.
Per Step 1 of the 2019 Revised Patent Subject Matter Eligibility Guidance, the claims are directed towards a process, machine, or manufacture.

Per Step 2A, Prong One, the claims recite specific limitations which fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG, as follows. Per independent claims 1, 8, and 15:

- analyzing the engagement metric data;
- based on the analyzing, determining whether the objective [e.g. a budget threshold established by an advertiser, per Spec. at paragraph 0049] has been achieved by a user, wherein achievement of the objective is based on the engagement metric data [e.g. a click on an engagement feature, per Spec. at paragraphs 0038-0039];
- generating a rendering control instruction to dynamically adjust the visibility parameters of the at least one 3D digital asset within the 3D engine for a predetermined time.

As noted supra, these limitations fall within at least one of the groupings of abstract ideas enumerated in the 2019 PEG. Specifically, these limitations fall within the group Certain Methods of Organizing Human Activity (e.g. fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)). That is, the aforementioned steps, as drafted, are simply business decisions to either display advertising or not display advertising for a predetermined time based on whether a marketing objective has been achieved. There is no technical problem to be solved and no technical solution presented for solving a technical problem.

Furthermore, the mere nominal recitation of "by a computer server," a "graphical user interface," etc. does not take the claim limitations out of the enumerated grouping. Thus, the claims recite an abstract idea.

Per Step 2A, Prong Two, the Examiner finds that the judicial exception is not integrated into a practical application. Although there are additional elements, other than those noted supra, recited in the claims, none of these additional elements, or combination of elements as recited in the claims, apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that it is more than a drafting effort designed to monopolize the exception. As drafted, the claims as a whole merely describe how to generally "apply" the aforementioned concepts and "link" them to a field of use, or serve as insignificant extra-solution activity. The claimed computer components are recited at a high level of generality and are merely invoked as tools to implement the idea, but are not technical in nature. Simply implementing the abstract idea on or with generic computer components is not a practical application of the abstract idea. These additional limitations are as follows:

[Additional claim limitations reproduced as images in the original Office Action (media_image4.png, media_image5.png); not recoverable here.]

However, these steps do not present a technical solution to a technical problem. These steps, as noted, merely serve to apply the idea via generic computers, link the idea to a field of use, or are insignificant extra-solution activity (e.g. data-gathering and data-transmittal steps) – e.g.
at this level of generality, Applicant's invention is not a technical solution or technique for "receiving" an objective(s) stipulated by an advertiser, nor for "receiving" engagement metric data, regardless of the described use of a generic, undisclosed "programmatic function" used to derive such data, nor a new technique of transmitting a data signal, regardless of the intended use of such signal. Note also that Applicant's wherein clause describing "the programmatic function" is merely a description of collecting data when recited at this high level of generality.

Note, Applicant has not invented a new or novel "programmatic function… using screen-bounds computations," but instead, per Applicant's own Remarks (e.g. Remarks, filed 3/21/2025, pgs. 8-9), such a function appears to be acknowledged as well-known technology; note Applicant states the following: "...Use of bounding box computations is a well-established technique for measuring user interaction and object visibility within 3D environments based on the disclosure. See Appln. No. 15/209,679, Table 1, paragraph 35." However, as neither Table 1 nor paragraph 35 of Appln. No. 15/209,679 mentions "bounding box" or "bounding box computations," the Examiner can only conclude Applicant means his disclosure acknowledges that "bounding box computations" used to determine user interaction with objects are a well-known technology which need not be referenced here. Furthermore, Applicant's only mention of "screen bounds" is in his co-pending specification 15/209,679 at paragraph [0035], and this only mentions it as follows: "…a screen bounds function (or equivalent) can be used to obtain an approximation (percentage) of the screen that the digital smart object is covering." Therefore, the term "screen-bounds computations" is apparently generic and could encompass at least what are known as "screen capture" or "bounding box" type computations, etc. Indeed, the Examiner acknowledges the prior art of Cunningham (U.S. 6,366,294 B1) teaches, per at least [4:20-25]: "…The bounding box defines the smallest of a predetermined geometric shape that circumscribes the object and is used to determine the occurrence of user interaction with a zooming object…", and such a technique is not the Applicant's invention but is instead a generic reference to a borrowed, already-known, off-the-shelf set of generic ideas for gathering data being viewed or rendered. Therefore, the claims as a whole do not integrate the method of organizing human activity into a practical application.

Further to Step 2A, Prong Two, the dependent claims do recite additional limitations. However, the combination of elements, when considered either as a whole, or independently, or in combination with the parent claims, does not integrate the identified method of organizing human activity into a practical application thereof. For example, dependent claims 2, 9, and 16 each recite in part the following: "…wherein the programmatic function is a screen bounding function, and wherein each 3D digital asset can transmit engagement metric data based on user interaction or viewability with the 3D digital asset." However, this appears to simply be a description of the context for implementation (e.g. a record/engagement metric, such as of a click or impression or field of view of an object [e.g. screen bounding], can be transmitted when a user interacts with the ad object), but this is not significantly more than the already recited abstract idea. Furthermore, note that the feature as recited, i.e. the description of what the ad (i.e. the 3D digital asset) may or may not do, is currently not functionally connected by the recited claim language to any of the already aforementioned steps, i.e. the "receiving" and "determining" steps themselves. Furthermore, the additional description of the "programmatic function" does nothing to illuminate its actual operation or the functional relationship between its output and input.
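The two generic techniques treated above as well known, a bounding box circumscribing an object used for interaction tests (per Cunningham), and a screen-bounds computation returning the percentage of screen covered (per the co-pending '679 specification), can each be sketched in a few lines, which is consistent with characterizing them as off-the-shelf. All function names below are illustrative assumptions, not anything from the record:

```python
def bounding_box(points):
    """Smallest axis-aligned rectangle circumscribing a set of
    projected 2D points, in the sense of Cunningham [4:20-25]."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def hit_test(box, click):
    """Detect a user interaction: does the click land inside the box?"""
    x0, y0, x1, y1 = box
    return x0 <= click[0] <= x1 and y0 <= click[1] <= y1

def screen_coverage(box, screen_w, screen_h):
    """Approximate fraction of the screen the box covers, in the sense
    of the '679 specification's 'screen bounds function (or equivalent)'."""
    x0, y0, x1, y1 = box
    # Clip the box to the visible screen before measuring its area.
    vis_w = max(0, min(x1, screen_w) - max(x0, 0))
    vis_h = max(0, min(y1, screen_h) - max(y0, 0))
    return (vis_w * vis_h) / (screen_w * screen_h)
```

Note the split the rejection draws: hit_test detects an interaction, while screen_coverage only measures how much of the screen the object occupies; they are related but distinct computations.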
Therefore, this description is not significantly more than the abstract idea itself, whether taken alone or in combination.

As another example, dependent claims 3, 10, and 17 each recite the following: "displaying another 3D digital asset on the client computer machine for the predetermined period of time." However, this is merely part of the already identified abstract idea. For example, it is a business decision to place a new ad when the budget of the former ad has been exhausted, and/or to replace an old ad with a new ad for a predetermined period of time according to an advertiser's objective, such as an advertiser's desire to not annoy a user with the same ad over and over, at least for a period of time; these decisions are simply business decisions falling within the group Certain Methods of Organizing Human Activity. Therefore, the Examiner does not find that these limitations integrate the abstract idea into a practical application thereof. Instead, these limitations, as a whole and in combination with the already recited claim elements of the parent claims, fail to integrate the method of organizing human activity into a practical application. A similar finding is made for the remaining dependent claims.

Per Step 2B, the Examiner does not find that the claims provide an inventive concept; i.e., the claims do not recite additional elements, or a combination of elements, that amount to significantly more than the judicial exception recited in the claim. As discussed with respect to Step 2A, Prong Two, the additional elements in the independent claims were considered as merely serving to "link" the idea to a field of use, or "apply" the idea via generic computing components, or as insignificant extra-solution activity. For the same reasons, these elements are not sufficient to provide an inventive concept; i.e. the same analysis applies here at Step 2B.

Mere instructions to apply an exception using a generic computer component and conventional data gathering cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. So, upon reevaluating here at Step 2B, these elements are determined to amount to no more than mere instructions to apply the exception using generic computer components (i.e. a server) and/or gather and transmit data, which is well-understood, routine, conventional activity in the field; i.e. note the Symantec, TLI, and OIP Techs court decisions cited in MPEP 2106.05(d)(II) indicate that mere receipt or transmission of data over a network is a well-understood, routine, and conventional function when it is claimed in a merely generic manner (as it is here). For these reasons, the claims are not found to include additional elements that are sufficient to amount to significantly more than the judicial exception.

Please see the 2019 Revised Patent Subject Matter Eligibility Guidance published in the Federal Register (84 FR 50) on January 7, 2019 (found at http://www.uspto.gov/patent/laws-and-regulations/examination-policy/examination-guidance-and-training-materials).

Claim Rejections - 35 USC § 103 (AIA)

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S.
1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as obvious over Lamontagne et al. (U.S. 2014/0114742 A1; hereinafter "Lamontagne") in view of Altberg et al. (U.S. 2009/0086958 A1; hereinafter "Altberg") and Cunningham et al. (U.S. 6,366,294 B1; hereinafter "Cunningham").

Claims 1, 8, 15 (Currently Amended): Pertaining to claims 1, 8, and 15, exemplified in the method steps of claim 1, Lamontagne teaches the following:

…receiving, by a computer server, an objective related to displaying at least one 3D digital asset (Lamontagne, see at least Figs. 8 and 10a-c and [0057]-[0058], e.g.: "…For example, an advertiser may define, through the advertising dashboard 16, a bookmark event as a qualified user interaction [an objective related to displaying at least one 3D digital asset] in a reward campaign. In a game, a user can pick up a brand's soda can [the soda can is a 3D digital object and the 'brand' is a graphic texture asset, see Lamontagne [0038], i.e. applicant's '3D digital asset'] and choose to bookmark the brand for viewing at a later time. When the bookmark interaction occurs, the BIND engine 40 can be notified and can compare an advertiser-defined counter value against a bookmark qualified user interaction [the objective]…"; Note per [0046]: "…An advertiser user can specify via the advertiser dashboard 16 an alternate asset to be used on a smart object 24.3. The alternate asset can be another texture…"; Per [0037]-[0038], e.g. regarding: "…a parameter may define the appearance of a smart object associated with a graphic texture asset [digital asset] retrieved from the asset repository 50… a parameter may be modified whereby initiating a change to the smart object appearance associated with a graphic texture asset retrieved from the asset repository 50…"; Also, per [0047]-[0051], e.g.: "…An advertiser metric [objective] can be defined as a measurement of one or more qualified marketing metrics. A qualified marketing metric can be comprised one or more user interactions with smart objects and subsequent actions… An advertiser metric can be used to identify an advertiser's intended reach or objective [objective related to displaying at least one 3D digital asset] to a target audience…."; Note [0025]-[0028]: "…smart objects preferably appear as virtual products and objects… and can be associated with advertising of a particular brand… The advertising [digital asset] of the virtual products and objects can appear in many ways, for example as … 3D labels [a digital asset], logos [another digital asset], printed surfaces… 3D shaped icons, packaging labeling of a product, etc…"; and again per [0047]: "…The smart object 24.3 can be… a three-dimensional (3D) object…"

[Screenshot reproduced as an image in the original Office Action (media_image6.png); not recoverable here.]

Also note [0057]-[0058], e.g.: "…a target marketing objective of 'Awareness' among males ages 21-36 [an objective], at the initialization [received at the beginning] of each game …The automatic relocation by optimization of branded smart object [3D digital asset of a digital object] through the life cycle of an ad campaign can help the advertiser to meet their marketing objective…"; The Examiner notes that Applicant's Specification at [0028], regarding his "smart objects" and "digital assets," reads on these teachings by Lamontagne of Lamontagne's "smart objects," which can take on different "graphic texture asset(s)" [digital assets].);

receiving, by the computer server, engagement metric data from the client computer machine (Lamontagne, see citations noted supra, e.g. at least Fig. 1A and [0066], teaching, e.g.: "The master data database 34 can receive data [e.g. engagement metric data – see below] from one or more smart objects 24.3 within the entertainment interface 23… The master data database 34 can store smart object events received from the smart object package manager 24, such as… the time an object is viewable [engagement metric data regarding viewability] on a screen, interaction time [engagement metric data regarding time of interaction with 3D digital asset], interaction type [more engagement metric data], time count from first appearance of object until user interaction with object, and other data related to the smart object 24.3…."),

wherein the engagement metric data is generated on the client computer machine by execution of a programmatic function of a 3D engine that […] detect user interaction or viewability of the at least one 3D digital asset […], and records the engagement metric data based on the detected interaction or viewability (Lamontagne, see citations noted supra, e.g. per [0047]: "…The smart object 24.3 can monitor event data that is related to the object. For example, the smart object 24.3 can monitor events [detect interactions or viewability], such as but not limited to, click object [a type of detected user interaction], pick up object [another detected interaction], view object up close [a type of detected user interaction or viewability], time spent with the object, perform call to action on object…"; and per at least [0025]-[0028]: "…The smart objects can include or take the form of… advertisements in the digital environment for example by the use of virtual three-dimensional rendered digital objects [rendering data] that are branded [e.g. with a graphic texture asset – see Lamontagne [0038] and [0046]] in a virtual space. Exemplary screenshots of smart objects are shown with respect to FIGS.
10a to 10c… When the user interacts with the one or more smart objects, at least one of the smart objects can record the interaction [records the engagement metric data] and transmit data about the interaction to the system 10. The system 10 can then analyze the user interaction with the smart object to provide a measurement of advertising effectiveness …”, and per [0066], smart object events received from the smart object package manager 24, include: “…the time an object is viewable [recorded engagement metric data based on the detected interaction or viewability] on a screen, interaction time [more recorded engagement metric data regarding time of interaction with 3D digital asset], interaction type [more recorded engagement metric data], time count from first appearance of object until user interaction with object, and other data related to the smart object 24.3… The interaction type can be, for example, click, closer camera view, etc…”; See also at least [0121]-[0122] disclosing smart object 24.3 data may be stored locally on client’s computer prior to transmitting to the system 10 and where software modules are involved, for example but not limited to the BIND engine 40, event stream processor 36, rewards system 33, smart object package manager 24, entertainment interface 32, these software modules may be stored as program instructions [programmatic functions] or computer readable code, executable by the processor, on a non-transitory computer readable media; applicant’s “programmatic function” reads on Lamontagne’s program instructions and software components which are executed by his client’s PC [client computer machine] by which the aforementioned functionalities are performed.) 
analyzing the engagement metric data (Lamontagne, see citations noted supra, including again at least [0028], teaching: “…The event stream processor 36 can then analyze the user interaction [engagement metric data] with the one or more smart objects [3D digital assets] to determine the effectiveness of advertising…”); based on the analyzing, determining, by the computer server, whether the objective has been achieved by a user, wherein achievement of the objective is determined based on the engagement metric data (Lamontagne, see citations noted supra, e.g. again at least [0010], [0028] and [0047]-[0051], teaching, e.g. “…A qualified marketing metric can be comprised one or more user interactions with smart objects and subsequent actions…This objective can be satisfied [achieved] if a user interacts with multiple brands soda cans, and then makes a choice of a preferred brand by picking up the soda can and sharing the product brand with a friend on a third party network such as social network…”; i.e. Lamontagne's computer determines that the user has interacted with and picked up a soda can smart object, and determines that the advertiser's objective regarding advertising is satisfied based on this observed user interaction metric with the soda can smart object. Also note Lamontagne at [0058]. Lamontagne’s advertisers stipulate metrics/objectives and the system determines whether user interactions, e.g. 
viewing or bringing into closer view, with smart objects, such as 3D digital objects, meet/achieve such stipulated metrics/objectives; see again at least [0064]-[0066] and [0083] e.g.: “…determine if a qualified user interaction has been satisfied…”), generating, by the computer server, a rendering control instruction to dynamically adjust visibility parameters of the at least one 3D digital asset within the 3D engine […] (Lamontagne, see citations noted supra, again at least [0053]-[0059], e.g.: “…The blended results of (a), (b), and (c) can be algorithmically interpreted and a decision parameter [rendering control instruction] can be generated [generating], based on an algorithm, that can be transmitted as a signal to the asset switch system 46 to initiate a change to any of the attributes of the smart objects 24.3…”; and per [0038]-[0041], teaching: “…For example, a time-based marketing campaign may set a numeric value parameter [a rendering control instruction parameter] associated with the asset switch system 46 indicating the duration of a campaign… When the duration of a campaign has expired or the monetary budget has been reached by satisfying a qualified marketing metric [when it is determined that the objective has been achieved], a parameter may be modified [i.e. an instruction is sent to modify the rendering state of the 3D digital asset] whereby initiating a change to the smart object appearance [i.e. to adjust a visibility parameter of an asset of the 3D digital object] associated with a graphic texture asset [the asset of the 3D digital object] retrieved from the asset repository 50…”); and transmitting, by the computer server, the rendering control instruction to the 3D engine, thereby modifying the rendering state of the at least one 3D digital asset, based on the engagement metric data, when it is determined that the objective has been achieved by the user (Lamontagne, see citations noted supra, including e.g. 
[0053]-[0059], teaching: “…a decision parameter [rendering control instruction] can be generated, based on an algorithm, that can be transmitted as a signal [transmitting the rendering control instruction] to the asset switch system 46 to initiate a change to any of the attributes [modifying the rendering state of the 3D digital asset] of the smart objects 24.3…”; where again, as noted supra, e.g. [0038]-[0041]: “…When the duration of a campaign has expired or the monetary budget has been reached by satisfying [achieving an objective] a qualified marketing metric [based on engagement metric data], a parameter may be modified [i.e. an instruction is sent to modify the rendering state of the 3D digital asset] whereby initiating a change to the smart object appearance [i.e. to adjust a visibility parameter of an asset of the 3D digital object] associated with a graphic texture asset [the asset of the 3D digital object] retrieved from the asset repository 50…”). Although Lamontagne teaches the aforementioned limitations, he may not explicitly teach that his change of display of his 3D digital asset is for a predetermined period of time; i.e. the difference between the teachings of the prior art and the claim limitation is only that Lamontagne may not explicitly teach that his asset switch system 46 performs its “change to any of the attributes [modifying the rendering state of the 3D digital asset] of the smart objects 24.3” (e.g. changing a particular brand image [asset] of a displayed 3D digital object such as a soda can) for “a predetermined period of time”. 
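The server-side flow mapped in the preceding passage (analyze engagement metric data against an advertiser objective, then emit a rendering control instruction when the objective is achieved) can be sketched minimally. All function names, thresholds, and the instruction format are illustrative assumptions, not the claimed or cited implementation.

```python
from typing import Optional

def objective_achieved(metrics: dict, objective: dict) -> bool:
    """Hypothetical objective test: enough interactions OR enough viewable time."""
    return (metrics.get("interaction_count", 0) >= objective.get("min_interactions", 1)
            or metrics.get("viewable_time_s", 0.0) >= objective.get("min_view_s", float("inf")))

def make_rendering_control_instruction(asset_id: str, metrics: dict,
                                       objective: dict) -> Optional[dict]:
    """Return an instruction for the client's 3D engine, or None if not triggered.

    Analogous in spirit to Lamontagne's "decision parameter" transmitted to an
    asset switch system, but the shape of the instruction here is invented.
    """
    if not objective_achieved(metrics, objective):
        return None
    # Instruction to modify the asset's rendering state / visibility parameter.
    return {"asset_id": asset_id, "action": "hide", "visibility": 0.0}

instr = make_rendering_control_instruction(
    "soda_can_01", {"interaction_count": 3}, {"min_interactions": 2})
print(instr)  # {'asset_id': 'soda_can_01', 'action': 'hide', 'visibility': 0.0}
```

In this sketch, the server transmits `instr` to the client, which applies it to the asset's rendering state; when the objective is unmet, no instruction issues.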
However, regarding this feature, Lamontagne in view of Altberg teaches this nuance as follows: a predetermined period of time (Altberg, see at least [0216], teaching e.g.: “the advertisement may be… paused by the system when the advertisement budget reaches a threshold for a current time period, etc.” and per [0228], teaching e.g.: “…the advertiser can manually pause the advertisement for a period of time [a predetermined period of time], or use the budget limit (427) to automate the management of the advertisement. For example, the advertiser can set up a daily budget limit (427) for the advertisement. When the daily budget limit is reached, the advertiser is not paying further advertisement fees for the remaining of the day until the next business day.”) Therefore, the Examiner finds that the limitation in question is merely applying a known technique of Altberg (directed towards pausing an advertisement for a period of time, e.g. based on a daily budget limit to automate the management of the advertisement, e.g. such that an advertisement is paused until the next day after a daily budget limit is reached) which is applicable to the known advertising system/method of Lamontagne (who already teaches transmitting a signal to initiate a change to his smart object appearance, e.g. associated with a graphic texture asset [the at least one 3D digital asset] of a smart object, when it is determined that an objective, e.g. a budget, has been achieved based on a user engagement with the digital asset, such as viewing the asset) to yield predictable results. 
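The budget-based pause Altberg is cited for (suspend the advertisement for the remainder of the day once a daily budget limit is reached, i.e. for a predetermined period) reduces to a short calculation. The function name and signature below are hypothetical illustrations, not Altberg's disclosure.

```python
import datetime
from typing import Optional

def pause_until(spent: float, daily_limit: float,
                now: datetime.datetime) -> Optional[datetime.datetime]:
    """Return when the ad resumes, or None if the daily limit is not yet reached.

    Illustrative sketch of a daily-budget pause: once spend reaches the limit,
    the ad is paused until the start of the next day (the predetermined period).
    """
    if spent < daily_limit:
        return None
    # Pause for the remainder of the day: resume at next midnight.
    return (now + datetime.timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0)

now = datetime.datetime(2026, 2, 21, 14, 30)
print(pause_until(spent=100.0, daily_limit=100.0, now=now))  # 2026-02-22 00:00:00
```

Combining this with the rendering-control logic above is the essence of the proposed Lamontagne/Altberg combination: the change to the asset's appearance persists only for such a computed period.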
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the techniques of Altberg to the device/method of Lamontagne to generate, by Lamontagne’s system, his decision parameter [rendering control instruction] also for the purpose of adjusting Lamontagne’s digital asset [visibility parameter] of his 3D digital object within his BIND engine [3D engine] only for a period of time, because Lamontagne and Altberg are analogous art in the same field of endeavor (at least G06Q30/02) and because, according to MPEP 2143(I)(C) and/or (D), the use of a known technique to improve known devices, methods, or products in the same way (or which are ready for improvement) is obvious. Although Lamontagne teaches the above features upon which the limitation recited below depends, and Lamontagne has been shown to teach “screenshots” [processing rendering data], e.g. as noted supra per e.g. [0025], and Lamontagne teaches he detects user interaction or viewability of at least one 3D digital asset, e.g. as noted supra per e.g. [0025]-[0028] in view of [0038], [0046], [0066], e.g.: smart object events received from the smart object package manager 24 include: “…the time an object is viewable [engagement metric data regarding viewability] on a screen, interaction time [engagement metric data regarding time of interaction with 3D digital asset], interaction type [more engagement metric data], time count from first appearance of object until user interaction with object, and other data related to the smart object 24.3… The interaction type can be, for example, click, closer camera view…”, etc., Lamontagne may not explicitly teach that he processes his screenshot [rendering data] to detect user interaction or viewability of his smart objects rendered with graphic texture assets [at least one 3D digital asset] using screen-bounds computations. 
However, regarding this feature, Lamontagne in view of Cunningham teaches the following nuance: …processes rendering data to detect user interaction or viewability of the at least one 3D digital asset using screen-bounds computations. (Cunningham, see at least [4:20-25], teaching: “…The bounding box defines the smallest of a predetermined geometric shape that circumscribes the object and is used to determine the occurrence of user interaction with a zooming object…”; Cunningham’s bounding box circumscribes a rendered image, and therefore he is processing rendering data to determine [detect] the occurrence of a user interaction with an object, and he does so using a bounding box [using screen-bounds computations].) Therefore, the Examiner understands that the limitation in question is merely applying a known technique of Cunningham (directed towards processing rendering data to determine [detect] the occurrence of a user interaction with an object using a bounding box [screen-bounds computations]) which is applicable to a known base device/method of Lamontagne (already directed towards determining user interaction, including viewability of smart objects such as 3D objects with digital graphic texture assets [digital assets]) to yield predictable results. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the techniques of Cunningham to the device/method of Lamontagne in order to perform the limitation as claimed, because Cunningham is pertinent to the user interaction identification objective of Lamontagne and because, according to MPEP 2143(I)(C) and/or (D), the use of a known technique to improve known devices, methods, or products in the same way (or which are ready for improvement) is obvious. Claims 2, 9: (previously presented) Lamontagne/Altberg/Cunningham teaches the limitations upon which these claims depend. 
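A "screen-bounds computation" of the kind Cunningham is cited for is, at its simplest, an axis-aligned bounding-box test in screen space: does the asset's projected box intersect the visible screen rectangle (viewability), and does a click land inside it (interaction)? The sketch below is an illustrative assumption; coordinates and names are invented.

```python
def boxes_intersect(a: tuple, b: tuple) -> bool:
    """Each box is (x_min, y_min, x_max, y_max) in screen pixels."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

def is_viewable(asset_box: tuple, screen: tuple = (0, 0, 1920, 1080)) -> bool:
    # Viewability: the asset's screen-space bounding box overlaps the screen.
    return boxes_intersect(asset_box, screen)

def click_hits(asset_box: tuple, x: float, y: float) -> bool:
    # Interaction: the click point falls inside the bounding box.
    x0, y0, x1, y1 = asset_box
    return x0 <= x <= x1 and y0 <= y <= y1

box = (1800, 900, 2100, 1200)      # bounding box partially off-screen
print(is_viewable(box))            # True (overlaps the screen rectangle)
print(click_hits(box, 1850, 950))  # True
```

Projecting a 3D asset's bounds into this 2D box is the engine's job; once the box exists, both detections are the comparisons shown above.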
Furthermore, Lamontagne teaches the following: …wherein the programmatic function is a screen bounding function (Lamontagne, see at least [0054]-[0066], and [0069], teaching e.g.: “…For example, a user sees an Oral-B® dental floss smart object and inspects the item and its label bringing the field of view [screen bounding] closer in the virtual environment…” and “…For example, a pattern or series of events with a smart object may define a consideration metric. A user sees a smart object soda can and inspects the item and its label bringing the field of view closer in the virtual environment…” and “…The interaction type can be, for example, click, closer [screen bounding] camera view…”; Lamontagne’s system/method recognizes the field of view [i.e. on the display screen] of the smart object and when the interaction indicates “a closer camera view” and the “field of view” is “closer”, etc…; applicant’s undisclosed “screen bounding function” reads on these teachings of Lamontagne. ), wherein each 3D digital asset can transmit engagement metric data based on user interaction or viewability with the 3D digital asset (Lamontagne, see at least [0043]-[0048] as noted supra). Claims 3, 10, 17: (Original) Lamontagne/Altberg/Cunningham teaches the limitations upon which these claims depend. Furthermore, Lamontagne teaches the following: …displaying another 3D digital asset on the client computer machine for the predetermined period of time (Lamontagne, see at least [0038]-[0041]; applicant’s “another 3D digital asset” reads on Lamontagne’s changed smart object asset, e.g. a different brand of soda). Claims 4, 11, 18: (Original) Lamontagne/Altberg/Cunningham teaches the limitations upon which these claims depend. 
Furthermore, Lamontagne teaches the following: …wherein preventing display of the at least one 3D digital asset on the graphical user interface for a predetermined period of time occurs across a plurality of 3D environments or platforms on one or more graphical user interfaces associated with the user (Lamontagne, see at least [0120], teaching e.g.: “…As shown in FIG. 11, system 10 may be operative with a real world gaming console platforms 200 such as Sony Playstation®, PC or Apple Macintosh® based software, mobile based software, or gamer network. The smart object package manager 24 may be embedded in games across multiple platforms 200 to receive, transmit, or receive and transmit directed actions to the system 10, via the Internet 300 to entertainment interface…”). Claims 5, 12, 19: (Original) Lamontagne/Altberg/Cunningham teaches the limitations upon which these claims depend. Furthermore, Lamontagne teaches the following: …wherein a plurality of objectives for the 3D digital asset can be provided across a plurality of 3D environments or platforms for a plurality of users (Lamontagne, again see citations noted supra, e.g. at least [0051] and [0120]). Claims 6, 13, 20: (Original) Lamontagne/Altberg/Cunningham teaches the limitations upon which these claims depend. Furthermore, Lamontagne teaches the following: …wherein the engagement metric data includes information related to at least one of user interaction related to tapping, touching, moving, time spent with the 3D digital asset, viewing, requesting detailed description associated with the 3D digital asset (Lamontagne, see at least [0028], [0032], [0047], and [0051], e.g. click, pick-up object, view object up close, etc…). Claims 7, 14: (Original) Lamontagne/Altberg/Cunningham teaches the limitations upon which these claims depend. 
Furthermore, Lamontagne teaches the following: …wherein the engagement metric data includes a user engagement score (Lamontagne, see citations noted supra, including at least [0058] in view of [0112], e.g.: “…The third party network metrics cycle dashboard 26 can provide an administrative console that allows viewing of user's engagement data associated with a third party network user and smart objects that has been processed by the BIND engine 40…”; where, as noted, such metric data includes “user can interact with multiple brands soda cans, and then make a choice of a preferred [score] brand by picking up the soda can and sharing the product brand with a friend on a third party network such as social network…”; applicant does not provide any particular measure for his score so as to distinguish from the concepts delineated by Lamontagne’s determination of a user preference for a brand, which represents an engagement score for that brand; also see [0042]). Claim 16: (previously presented) Lamontagne/Altberg/Cunningham teaches the limitations upon which this claim depends. Furthermore, Lamontagne teaches the following: …wherein the programmatic function is a screen bounding function, and wherein each 3D digital asset can transmit engagement metric data based on user interaction or viewability with the 3D digital asset (see rejection for claims 2, 9), wherein the engagement metric data includes a user engagement score (see rejection for claims 7, 14). Response to Arguments Applicant amended claims 1, 8, and 15 on 10/27/2025. Applicant’s arguments (hereinafter “Remarks”), also filed 10/27/2025, have been fully considered but are moot in view of the new grounds of rejection necessitated by applicant’s amendments. Also note the following: Regarding the double patenting rejection, the Examiner acknowledges Applicant’s Terminal Disclaimer filed 9/13/2023. 
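The "user engagement score" discussed in the claim 7/14 and claim 16 mappings above has no particular measure in the record; any weighted aggregate of interaction events would qualify. The weights and formula below are purely hypothetical, invented for illustration.

```python
# Illustrative weights per event type; neither the application nor
# Lamontagne prescribes any particular formula.
WEIGHTS = {"viewable_s": 0.1, "click": 1.0, "pick_up": 2.0, "share": 5.0}

def engagement_score(metrics: dict) -> float:
    """Hypothetical engagement score: weighted sum of recorded event counts."""
    return sum(WEIGHTS.get(k, 0.0) * v for k, v in metrics.items())

score = engagement_score({"viewable_s": 12.0, "click": 2, "pick_up": 1, "share": 1})
print(score)  # roughly 10.2 (0.1*12 + 1*2 + 2*1 + 5*1)
```

Even a brand-preference determination (which soda can the user picked up and shared) is such a score under this reading, which is the examiner's point.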
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL J SITTNER whose telephone number is (571)270-3984. The examiner can normally be reached M-F; ~9:30-6:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Waseem Ashraf can be reached on (571) 270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Michael J Sittner/ Primary Examiner, Art Unit 3621

Footnote 1: The main difference between the aforementioned limitation in question and the corresponding limitations of claims 8 and 15 is that claims 8 and 15 resolve to claiming “a processing system” receives the engagement metric data from the client computer. 
Footnote 2: Specification [0028]: “…3D digital smart objects (also referred to herein as smart objects), in one embodiment, can be used to generate generic content (banner/billboard, videos, and/or 3D assets) that can optionally be replaced with branded content from an advertiser within a variety of digital platforms…”

Prosecution Timeline

Jul 17, 2018
Application Filed
Jun 24, 2020
Non-Final Rejection — §101, §103, §112
Sep 25, 2020
Response Filed
Oct 09, 2020
Final Rejection — §101, §103, §112
Mar 13, 2021
Request for Continued Examination
Mar 15, 2021
Response after Non-Final Action
May 13, 2021
Non-Final Rejection — §101, §103, §112
Oct 28, 2021
Response Filed
Jan 29, 2022
Final Rejection — §101, §103, §112
Aug 03, 2022
Request for Continued Examination
Aug 09, 2022
Response after Non-Final Action
Dec 12, 2022
Non-Final Rejection — §101, §103, §112
Feb 18, 2023
Response after Non-Final Action
Feb 18, 2023
Notice of Allowance
Mar 15, 2023
Response after Non-Final Action
Jun 17, 2023
Non-Final Rejection — §101, §103, §112
Sep 13, 2023
Response after Non-Final Action
Sep 13, 2023
Notice of Allowance
Nov 08, 2023
Response after Non-Final Action
Apr 25, 2024
Response after Non-Final Action
Apr 29, 2024
Response after Non-Final Action
Jul 26, 2024
Response after Non-Final Action
Oct 31, 2024
Response after Non-Final Action
Dec 05, 2024
Non-Final Rejection — §101, §103, §112
Mar 21, 2025
Response Filed
Apr 07, 2025
Final Rejection — §101, §103, §112
Apr 11, 2025
Response after Non-Final Action
Jun 24, 2025
Final Rejection — §101, §103, §112
Oct 27, 2025
Request for Continued Examination
Nov 05, 2025
Response after Non-Final Action
Feb 21, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561735
INFORMATION PRESENTATION METHOD AND INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Feb 24, 2026
Patent 12469047
METHOD AND SYSTEM FOR DETECTING FRAUDULENT USER-CONTENT PROVIDER PAIRS
2y 5m to grant Granted Nov 11, 2025
Patent 12462227
DISPENSING SYSTEM
2y 5m to grant Granted Nov 04, 2025
Patent 12456135
Systems for Integrating Online Reviews with Point of Sale (POS) OR EPOS (Electronic Point of Sale) System
2y 5m to grant Granted Oct 28, 2025
Patent 12417752
COORDINATED MULTI-VIEW DISPLAY EXPERIENCES
2y 5m to grant Granted Sep 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

10-11
Expected OA Rounds
11%
Grant Probability
26%
With Interview (+15.4%)
4y 9m
Median Time to Grant
High
PTA Risk
Based on 381 resolved cases by this examiner. Grant probability derived from career allow rate.
