Prosecution Insights
Last updated: April 19, 2026
Application No. 18/811,687

SYSTEM AND METHOD FOR TRAINING, TAGGING, RECOMMENDING, AND GENERATING DIGITAL CONTENT BASED ON BIOMETRIC DATA

Non-Final OA: §101, §103, §DP
Filed: Aug 21, 2024
Examiner: FURTADO, WINSTON RAHUL
Art Unit: 3687
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: StoryUp, Inc.
OA Round: 1 (Non-Final)
Grant Probability: 19% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 10m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 19% (28 granted / 145 resolved; -32.7% vs TC avg). This examiner grants only 19% of cases.
Interview Lift: +26.2% (strong lift among resolved cases with an interview)
Avg Prosecution: 3y 10m typical timeline (35 applications currently pending)
Total Applications: 180 across all art units (career history)
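
The metrics above are simple ratios over the examiner's resolved cases: the career allow rate is grants divided by resolutions, and the interview lift is the difference in allow rate between cases with and without an interview. A minimal sketch of that computation, assuming a hypothetical list of case records with disposition and interview flags (the Case fields are illustrative, not from any real USPTO data feed):

```python
from dataclasses import dataclass

@dataclass
class Case:
    granted: bool        # resolved as allowed/granted
    had_interview: bool  # at least one examiner interview on record

def allow_rate(cases: list[Case]) -> float:
    """Allow rate: granted cases divided by resolved cases."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[Case]) -> float:
    """Difference in allow rate between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)
```

On the figures shown here, 28 / 145 ≈ 19.3%, and the 46% with-interview projection appears to be the career rate plus the +26.2-point lift.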

Statute-Specific Performance

§101: 38.6% (-1.4% vs TC avg)
§103: 34.1% (-5.9% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 11.7% (-28.3% vs TC avg)
Tech Center averages are estimates, shown for comparison. Based on career data from 145 resolved cases.
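
Each delta is the examiner's per-statute rate minus the Tech Center average, and the four deltas shown all imply a TC average of about 40% per statute (e.g., 38.6% + 1.4 = 40.0; 10.1% + 29.9 = 40.0). A minimal sketch of the comparison, assuming the figures are per-statute success rates (the exact metric definition, e.g., the share of such rejections later overcome, is not stated here) and using the implied 40% baseline:

```python
# Examiner per-statute rates from the panel above; the 0.400 TC baseline is
# implied by the stated deltas, not taken from any published USPTO source.
examiner = {"101": 0.386, "103": 0.341, "102": 0.101, "112": 0.117}
TC_AVERAGE = 0.400

for statute, rate in examiner.items():
    delta = rate - TC_AVERAGE
    print(f"§{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```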

Office Action

Rejections: §101, §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims
This action is in reply to the application filed on August 21, 2024. Claims 1-20 are currently pending and have been examined.

Information Disclosure Statement
The information disclosure statements (IDS) were submitted on 11/22/2024 and 02/07/2025. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.

Priority
Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 119(e) as follows: The later-filed application must be an application for a patent for an invention which is also disclosed in the prior application (the parent or original nonprovisional application or provisional application). The disclosure of the invention in the parent application and in the later-filed application must be sufficient to comply with the requirements of 35 U.S.C. 112(a) or the first paragraph of pre-AIA 35 U.S.C. 112, except for the best mode requirement. See Transco Products, Inc. v. Performance Contracting, Inc., 38 F.3d 551, 32 USPQ2d 1077 (Fed. Cir. 1994).

The disclosure of the prior-filed application, Application No. 63/520,989, fails to provide adequate support or enablement in the manner provided by 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, for one or more claims of this application. For claim 16, the prior-filed application does not provide support for "wherein the device is a television." Examiner cannot find disclosure that the device is a television in the prior-filed application. For claim 18, the prior-filed application does not provide support for "wherein the therapeutic digital content is stored in association with a predicted impact score." Examiner cannot find disclosure that the therapeutic digital content is stored in association with a predicted impact score. Accordingly, claims 16 and 18-19 are not entitled to the benefit of the prior application.

Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.

Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-23 of US11101031B2 in view of Hill et al. (US20180190376A1). Although the claims at issue are not identical, they are not patentably distinct from each other because they recite substantially similar limitations. This is a nonstatutory double patenting rejection. The comparison below exhibits the similarity between the independent claims, where claims 1 and 11 of the current application are narrower variations of the claims of the reference patent.

App No. 18/811,687, Claim 1: collecting, via a biometric tracker, a first biometric data of the user, wherein the biometric tracker includes at least one sensing device; processing the first biometric data to establish a baseline dataset for the user; extracting at least one feature parameter from the digital content; collecting, via the biometric tracker, a second biometric data of the user while the user is exposed to the digital content; generating, via an iteratively trained training model, a model output based on the second biometric data and the at least one feature parameter; determining an impact score for the digital content based on the baseline dataset and the model output; and presenting, via a display device, the impact score to the user.

US11101031B2, Claim 1: measuring the user's initial biometric activity; creating a baseline dataset corresponding to the user's initial biometric activity; providing the user with a first content, wherein the first content exposes the user to at least one of a first virtual reality environment, a first augmented reality environment, and/or a first mixed reality environment; measuring the user's biometric activity during and/or after the user's exposure to the first content; creating a first biometric dataset corresponding to the user's biometric activity resulting from the user's exposure to the first content; comparing the first biometric dataset with the baseline dataset; providing the user with a second content, wherein the second content exposes the user to at least one of a second virtual reality environment, a second augmented reality environment, and/or a second mixed reality environment; measuring the user's biometric activity during and/or after the user's exposure to the second content; creating a second biometric dataset corresponding to the user's biometric activity resulting from the user's exposure to the second content; comparing the second biometric dataset with at least one of the first biometric dataset and the baseline dataset; and determining whether at least one of the first content and the second content effect positive change in the user's biometric activity.

App No. 18/811,687, Claim 11: a biometric tracker including a sensor, wherein the biometric tracker is designed to collect biometric data of the user; a device adapted to present the digital content to the user; and a controller including a processor configured to: process a first biometric data of the user collected by the biometric tracker to establish a baseline dataset for the user; extract at least one feature parameter from the digital content; collect a second biometric data of the user collected by the biometric tracker while the user is exposed to the digital content; process the second biometric data and the at least one feature parameter using an iteratively trained training module; determine an impact score for the digital content based on the baseline dataset and an output of the iteratively trained training module; and present, via the device, the impact score to the user.

US11101031B2, Claim 15: a processor; a content database, the database containing a plurality of different virtual reality content, augmented reality content, and/or mixed reality content; a virtual reality device configured to provide one or more of the virtual reality content, the augmented reality content, and/or the mixed reality content to the user; at least one biometric activity monitor configured to measure the biometric activity of the user; and an application program comprising programming instructions that, when executed by the processor, cause the system to: measure the user's initial biometric activity using the at least one biometric activity monitor; create a baseline dataset corresponding to the user's initial biometric activity; provide the user, through the virtual reality device, with a first content, wherein the first content comprises at least one of a first virtual reality content, a first augmented reality content, and/or a first mixed reality content; measure the user's biometric activity during and/or after the user's exposure to the first content using the at least one biometric activity monitor; create a first biometric dataset corresponding to the user's biometric activity resulting from the user's exposure to the first content; compare the first biometric dataset with the baseline dataset; provide the user, through the virtual reality device, with a second content, wherein the second content comprises at least one of a second virtual reality content, a second augmented reality content, and/or a second mixed reality content; measure the user's biometric activity during and/or after the user's exposure to the second content using the at least one biometric activity monitor; create a second biometric dataset corresponding to the user's biometric activity resulting from the user's exposure to the second content; compare the second biometric dataset with at least one of the first biometric dataset and the baseline dataset; and determine whether at least one of the first content and the second content effect positive change in the user's biometric activity.

The difference between the present application and US11101031B2 is that the present application discloses that the biometric tracker includes at least one sensing device; extracting at least one feature parameter from the digital content; generating, via an iteratively trained training model, a model output based on the second biometric data and the at least one feature parameter; determining an impact score for the digital content based on the baseline dataset and the model output; and presenting, via a display device, the impact score to the user, which is obvious over Hill et al. (US20180190376A1) ([0037]; [0042]; [0034] & [0106]; [0008]; [0108]) with the motivation to evaluate whether the selected VR content has a positive effect on the psychological, psychiatric or medical condition of the user (see Hill, Abstract).

Dependent claims 3 and 12 are obvious variants of claims 6 and 17 of US11101031B2 for recitation of the biometric data including one or more of EEG data, heart rate data, respiratory data, blood pressure data, or skin temperature data. Dependent claim 4 is an obvious variant of claims 1 and 15 of US11101031B2 for reciting that the digital content comprises at least one selected from the group consisting of virtual reality content, augmented reality content, and mixed reality content. Dependent claim 5 is an obvious variant of claim 15 of US11101031B2 for reciting that the display device is a virtual reality device. Dependent claim 13 is an obvious variant of claims 2 & 16 of US11101031B2 for reciting that the biometric tracker includes one or more of an EEG monitor. The remaining dependent claims in the present application also recite substantially similar limitations to those of US11101031B2, such as various types of feature parameters, generating scores, and recommending digital content to the user.

Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1
The claims recite subject matter within a statutory category as a process (claims 1-10) and a machine (claims 11-20).
INDEPENDENT CLAIMS

Step 2A Prong 1
Claim 1 recites the steps of: collecting, via a biometric tracker, a first biometric data of the user, wherein the biometric tracker includes at least one sensing device; processing the first biometric data to establish a baseline dataset for the user; extracting at least one feature parameter from the digital content; collecting, via the biometric tracker, a second biometric data of the user while the user is exposed to the digital content; generating, via an iteratively trained training model, a model output based on the second biometric data and the at least one feature parameter; determining an impact score for the digital content based on the baseline dataset and the model output; and presenting, via a display device, the impact score to the user.

Claim 11 recites: a biometric tracker including a sensor, wherein the biometric tracker is designed to collect biometric data of the user; a device adapted to present the digital content to the user; and a controller including a processor configured to: process a first biometric data of the user collected by the biometric tracker to establish a baseline dataset for the user; extract at least one feature parameter from the digital content; collect a second biometric data of the user collected by the biometric tracker while the user is exposed to the digital content; process the second biometric data and the at least one feature parameter using an iteratively trained training module; determine an impact score for the digital content based on the baseline dataset and an output of the iteratively trained training module; and present, via the device, the impact score to the user.

These steps for evaluating a biometric response of a user to digital content, as drafted, under the broadest reasonable interpretation, include methods of organizing human activity. That is, nothing in the claim elements precludes the italicized portions from managing personal behavior or relationships or interactions between people through organizing the activity around the collection and processing of biometric data to present an impact score to the user. This could be analogized to considering historical usage information while inputting data. If a claim limitation, under its broadest reasonable interpretation, covers performance as organizing human activity but for the recitation of generic computer components, then it falls within the "Methods of Organizing Human Activity" grouping of abstract ideas. Accordingly, the claims recite an abstract idea.

Step 2A Prong 2
This judicial exception is not integrated into a practical application.
In particular, the additional elements (the non-italicized portions identified above for claims 1 and 11) do not integrate the abstract idea into a practical application, other than the abstract idea per se, because the additional elements amount to no more than limitations which:
- amount to mere instructions to apply an exception (such as the recitation of "via a biometric tracker"; "wherein the biometric tracker includes at least one sensing device"; "via an iteratively trained training model"; "via a display device"; "a device adapted to present the digital content to the user"; and "a controller including a processor configured," which amounts to invoking computers as a tool to perform the abstract idea, see MPEP 2106.05(f)); and
- add insignificant extra-solution activity to the abstract idea (such as the recitation of "collecting […] a first biometric data of the user" and "collecting […] a second biometric data of the user while the user is exposed to the digital content," which amounts to mere data gathering since it does not add meaningful limitations to the collecting actions performed, see MPEP 2106.05(g)).

Each of the above additional elements therefore only amounts to mere instructions to implement functions within the abstract idea using generic computer components or other machines within their ordinary capacity, and adds insignificant extra-solution activity to the abstract idea. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. These elements are therefore not sufficient to integrate the abstract idea into a practical application. Therefore, the above claims, as a whole, are directed to an abstract idea.

Step 2B
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements amount to no more than mere instructions to apply an exception and add insignificant extra-solution activity to the abstract idea. Additionally, the additional limitations, other than the abstract idea per se, amount to no more than limitations which:
- amount to mere instructions to apply an exception in particular fields (such as the recitation of "via a biometric tracker"; "wherein the biometric tracker includes at least one sensing device"; "via an iteratively trained training model"; "via a display device"; "a device adapted to present the digital content to the user"; and "a controller including a processor configured"), e.g., a commonplace business method or mathematical algorithm being applied on a general-purpose computer, Alice Corp. v. CLS Bank, MPEP 2106.05(f); and
- amount to elements that have been recognized as well-understood, routine, and conventional activity in particular fields (such as the recitation of "collecting […] a first biometric data of the user" and "collecting […] a second biometric data of the user while the user is exposed to the digital content"), e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i).

Looking at the limitations as an ordered combination adds nothing that is not already present when looking at the elements taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide generic computer implementation.
DEPENDENT CLAIMS

Step 2A Prong 1
Dependent claims recite additional subject matter which further narrows or defines the abstract idea embodied in the claims, such as claims 2-10 and 12-20 reciting particular aspects for evaluating a biometric response of a user to digital content:
- [Claim 2] wherein the first biometric data and the second biometric data includes electroencephalogram (EEG) data;
- [Claim 3] wherein the first biometric data and the second biometric data includes heart rate data;
- [Claim 4] wherein the digital content comprises at least one selected from the group consisting of virtual reality content, augmented reality content, mixed reality content, audio content, spatial content, video content, and/or audiovisual content;
- [Claim 5] wherein the display device is a virtual reality device;
- [Claim 6] wherein the at least one feature parameter can include one or more of a color, a texture, a shape, a lighting feature, a volume, a speed, a proximity, a location, or a sound of the digital content;
- [Claim 7] wherein the impact score for the digital content is unique to the user;
- [Claim 8] receiving third-party digital content from a content database; generating, via the iteratively trained training module, a predicted impact score for the third-party digital content; and recommending the third-party digital content to the user based on the predicted impact score and the second biometric data;
- [Claim 9] generating a modified version of the third-party digital content based on the predicted impact score; and presenting, via the display device, the modified version of the third-party digital content based on the predicted impact score;
- [Claim 10] wherein generating the modified version of the third-party digital content includes altering a story arc of the third-party digital content;
- [Claim 12] wherein the biometric data includes one or more of EEG data, heart rate data, respiratory data, blood pressure data, functional magnetic resonance imaging data, near-infrared spectroscopy data, or skin temperature data;
- [Claim 13] wherein the biometric tracker includes one or more of an EEG monitor, a heart rate monitor, a respiratory monitor, a blood pressure monitor, or a skin temperature monitor;
- [Claim 14] wherein the processor is further configured to: receive, from the user, a target physiological state; generate, based on the impact score and the target physiological state, a modified version of the digital content; and present, via the device, the modified version of the digital content to the user;
- [Claim 15] wherein the at least one feature parameter can include one or more of a color, a texture, a shape, a lighting feature, a volume, a speed, a proximity, a location, or a sound of the content;
- [Claim 16] wherein the digital content is two-dimensional video content; and wherein the device is a television;
- [Claim 17] wherein the processor is further configured to: receive, from the user, a target psychological state; determine a difference between a current psychological state of the user and the target psychological state based on the second biometric data; retrieve, from a digital content database, therapeutic digital content based on the difference between the current psychological state of the user and the target psychological state; and recommend the therapeutic digital content to the user;
- [Claim 18] wherein the therapeutic digital content is stored in association with a predicted impact score;
- [Claim 19] wherein the processor is further configured to determine the predicted impact score for the therapeutic digital content based on historical biometric data associated with the user;
- [Claim 20] wherein the impact score for the digital content is unique to the user.
These italicized portions are methods of organizing human activity since they merely describe types of data and determinations that can be performed by humans.

Step 2A Prong 2
Dependent claims 4-5, 8-10, 13-14, and 16-19 recite additional subject matter which amounts to limitations consistent with the additional elements in the independent claims:
- The additional limitations in claim 4 (virtual reality content, augmented reality content, mixed reality content, audio content, spatial content, video content, and/or audiovisual content); claim 5 (wherein the display device is a virtual reality device); claim 8 (via the iteratively trained training module); claim 9 (generating a modified version of the third-party digital content; and presenting, via the display device, the modified version of the third-party digital content); claim 10 (wherein generating the modified version of the third-party digital content includes altering a story arc of the third-party digital content); claim 13 (includes one or more of an EEG monitor, a heart rate monitor, a respiratory monitor, a blood pressure monitor, or a skin temperature monitor); claim 14 (generate […] a modified version of the digital content; and present, via the device, the modified version of the digital content to the user); claim 16 (wherein the digital content is two-dimensional video content; and wherein the device is a television); and claim 19 (the processor is further configured) amount to invoking computers as a tool to perform the abstract idea, see MPEP 2106.05(f).
- The additional limitations in claim 8 (receiving third-party digital content from a content database); claim 14 (receive, from the user, a target physiological state); claim 17 (receive, from the user, a target psychological state; and retrieve, from a digital content database, therapeutic digital content); and claim 18 (wherein the therapeutic digital content is stored in association with a predicted impact score) amount to mere data gathering and storage since they do not add meaningful limitations to the receiving, retrieving, and storing actions performed, see MPEP 2106.05(g).
Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Step 2B
Dependent claims 4-5, 8-10, 13-14, 16, and 19 recite additional subject matter which, as discussed above with respect to integration of the abstract idea into a practical application, amounts to invoking computers as a tool to perform the abstract idea, e.g., a commonplace business method or mathematical algorithm being applied on a general-purpose computer, Alice Corp. v. CLS Bank, MPEP 2106.05(f). Also, see [0025], [0028], and [0039]-[0040] of the specification, which disclose off-the-shelf devices. Dependent claims 8, 14, and 17 recite additional subject matter which amounts to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, e.g., receiving or transmitting data over a network, Symantec, MPEP 2106.05(d)(II)(i). Dependent claim 18 amounts to elements that have been recognized as well-understood, routine, and conventional activity in particular fields, e.g., storing and retrieving information in memory, Versata Dev. Group, Inc., MPEP 2106.05(d)(II)(iv).
There is no indication that these additional elements improve the functioning of a computer or improve any other technology. Their collective functions merely provide generic computer implementation. Therefore, in consideration of all the facts, the present invention is not a patent-eligible invention under 35 U.S.C. 101.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hill et al. (US20190198153A1) in view of Krishnan (US20230012960A1).
Regarding claim 1, Hill discloses:
collecting, via a biometric tracker, a first biometric data of the user, wherein the biometric tracker includes at least one sensing device ([0036] "Sensors or monitors 22-30 can be configured to monitor, record and/or collect certain types of biometric data of a patient or user before, during and after the user engages with selected VR content through system 10." Also, see Figure 2);
processing the first biometric data to establish a baseline dataset for the user ([0033] "Processor 12 can also be configured to receive and process biometric data from monitors 22-30 and associated with a patient or user." [0040] "At step 104, a baseline biometric dataset can be created for the patient based on the initial biometric data recorded at step 102.");
extracting at least one feature parameter from the digital content ([0033] "Processor 12 can be configured to communicate with VR content database module 16 in order to access and transmit VR content based on determined parameters or instructions associated with a patient or user." [0041] "The VR content can include any number of different components or features, including but not limited to visual stimuli, color, lighting, movement, camera angle, sound, music, voice, pacing, timing, characters, story arc, and script, aimed at influencing a patient's emotional, psychological and/or psychiatric state.");
collecting, via the biometric tracker, a second biometric data of the user while the user is exposed to the digital content ([0106] "with biometric devices (such as monitors 22-30 in system 10)" [0044] "At step 116, the patient's biometrics may be measured and recorded during and/or after exposure to the second VR content in a similar manner to steps 102 and 108. At step 118, a second biometric dataset for the patient can be created based on the patient's biometric data measured during step 116.");
and determining an impact score for the digital content based on the baseline dataset and the model output ([0008] "The user's EEG-type biometric data may be measured during and/or after exposure to certain VR content and compared to previously measured EEG-type biometric data of the user to produce z-scores of change for specific brainwave types (i.e., alpha, delta, theta, etc.) in certain regions of the user's brain. The z-scores may then be used to identify statistically significant changes in the user's EEG-type biometric data (such as by identifying z-scores greater than or equal to 1.0).").

Hill does not explicitly disclose, but Krishnan teaches:
generating, via an iteratively trained training model, a model output based on the second biometric data and the at least one feature parameter ([0066] "In some implementations, the score generator 143 (e.g., a deep learning model or an XGBoost model) can be configured to iteratively receive a subset of user data from the set of past user data, a subset of setting data from the set of past setting data, and/or a subset of drug data from the set of past drug data described above and generate an output.");
and presenting, via a display device, the impact score to the user ([0079] "an indication of state of the user (e.g., a score) […] Therefore, in some instances, the score generator 143 can determine whether a current state of the adaptive setting is successful in inducing or maintaining an optimal set. In some implementations, the therapy device 110 can include a feedback system indicator to display, to the user.").

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to include in the system of Hill generating, via an iteratively trained training model, a model output based on the second biometric data and the at least one feature parameter, and presenting, via a display device, the impact score to the user, as taught by Krishnan, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 2, Hill discloses wherein the first biometric data and the second biometric data includes electroencephalogram (EEG) data ([0040] "the biometrics recorded for the patient may include EEG readings, heart rate, blood pressure, respiratory rate, and skin temperature.").

Regarding claim 3, Hill discloses wherein the first biometric data and the second biometric data includes heart rate data ([0040] "the biometrics recorded for the patient may include EEG readings, heart rate, blood pressure, respiratory rate, and skin temperature.").

Regarding claim 4, Hill discloses wherein the digital content comprises at least one selected from the group consisting of virtual reality content, augmented reality content, mixed reality content, audio content, spatial content, video content, and/or audiovisual content ([0006] "The present invention is directed generally to systems and methods for using virtual reality ('VR'), augmented reality ('AR') and/or mixed reality ('MR') content in the therapeutic treatment of psychological, psychiatric or other medical conditions in patients.").

Regarding claim 5, Hill discloses wherein the display device is a virtual reality device ([0032] "As shown in FIG. 1, system 10 may include a processor 12, a VR device 14").

Regarding claim 6, Hill discloses wherein the at least one feature parameter can include one or more of a color, a texture, a shape, a lighting feature, a volume, a speed, a proximity, a location, or a sound of the digital content ([0041] "The VR content can include any number of different components or features, including but not limited to visual stimuli, color, lighting, movement, camera angle, sound, music, voice, pacing, timing, characters, story arc, and script, aimed at influencing a patient's emotional, psychological and/or psychiatric state.").

Regarding claim 7, Hill discloses wherein the impact score for the digital content is unique to the user ([0059] "the EEG biometric data of the user may be measured during and/or after exposure to the VR content and compared to previously measured EEG biometric data of the user to produce z-scores of change for specific brainwave types (alpha, delta, theta, etc.) in certain regions of the brain" [0101] "The BRAINAVATAR® system has a feature called Z-builder which allows for the conversion of a raw EEG file into a quantitative reference file. Post-VR (or During-VR) EEG data can be compared to the Pre-VR reference file producing z scores of change for each variable at each region of interest for each subject.").

Regarding claim 8, Hill discloses:
receiving third-party digital content from a content database ([0033] "Processor 12 can be configured to communicate with VR content database module 16 in order to access and transmit VR content based on determined parameters or instructions associated with a patient or user." [0035] "VR content database module 16 may comprise a reference library containing a plurality of categorized VR content");
generating, […], a predicted impact score for the third-party digital content ([0033] "Processor 12 can be configured to communicate with biometric reference database module 18 in order to access and utilize biometric data and algorithms for analyzing and processing a patient's biometric data received by system 10." [0100] "specific VR content experiences have been shown to result in predictable changes in biometric and brainwave patterns. As also described above, this may be assessed by comparing pre-VR biometric data to post-VR biometric data");
and recommending the third-party digital content to the user based on the predicted impact score and the second biometric data ([0101] "Post-VR (or During-VR) EEG data can be compared to the Pre-VR reference file producing z scores of change for each variable at each region of interest for each subject." [0103] "a user's biometrics can be assessed for patterns associated with specific concerns (anxiety, depression, pain, etc.). Based on these results, the user may be offered a selection of experiences specifically designed to address areas of concern/biometric patterns. For example, users demonstrating the certain types of profiles (such as stress reduction/pain reduction 300, mindfulness 302, focus 304, quiet mind 306 and open heart 308) will be recommended the VR content specific to such profiles.").

Hill does not explicitly disclose, but Krishnan teaches, via the iteratively trained training module ([0066] "In some implementations, the score generator 143 (e.g., a deep learning model or an XGBoost model) can be configured to iteratively receive […]"). Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to include in the system of Hill the use of the iteratively trained training module as taught by Krishnan, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 9, Hill discloses:
generating a modified version of the third-party digital content based on the predicted impact score ([0050] "If, however, it is determined at step 210 that the changes in the user's biometric data corresponding to the currently selected VR content do not exceed the threshold requirements, the method proceeds to step 214. At step 214, a modified selected VR content is provided to the user in place of the previously selected VR content." [0058] "As further provided in method 200 at steps 212-214, when it is determined by analyzing the user's biometric data that the VR content is not providing the desired changes in the user's biometric data, the VR content can be modified or altered to provide a more suitable VR content.");
and presenting, via the display device, the modified version of the third-party digital content based on the predicted impact score ([0048] "Then at step 210, it is determined whether the changes in the user's biometric data as calculated at step 208 exceed specific threshold requirements." [0050] "At step 214, a modified selected VR content is provided to the user in place of the previously selected VR content." [0051] "A notification may be generated and provided to the user within the currently selected VR content.").

Regarding claim 10, Hill discloses wherein generating the modified version of the third-party digital content includes altering a story arc of the third-party digital content ([0053] "the adjustments to the VR content may be configured to vary the […] story arc").

Regarding claim 11, Hill discloses:
a biometric tracker including a sensor, wherein the biometric tracker is designed to collect biometric data of the user ([0036] "Sensors or monitors 22-30 can be configured to monitor, record and/or collect certain types of biometric data of a patient or user before, during and after the user engages with selected VR content through system 10.");
a device adapted to present the digital content to the user ([0034] "According to one embodiment, VR device 14 can be configured as a headset that is worn over a user's eyes like a pair of goggles.");
and a controller including a processor configured to: process a first biometric data of the user collected by the biometric tracker to establish a baseline dataset for the user ([0033] "Processor 12 can also be configured to receive and process biometric data from monitors 22-30 and associated with a patient or user." [0040] "At step 104, a baseline biometric dataset can be created for the patient based on the initial biometric data recorded at step 102.");
extract at least one feature parameter from the digital content ([0033] "Processor 12 can be configured to communicate with VR content database module 16 in order to access and transmit VR content based on determined parameters or instructions associated with a patient or user." [0041] "The VR content can include any number of different components or features, including but not limited to visual stimuli, color, lighting, movement, camera angle, sound, music, voice, pacing, timing, characters, story arc, and script, aimed at influencing a patient's emotional, psychological and/or psychiatric state.");
collect a second biometric data of the user collected by the biometric tracker while the user is exposed to the digital content ([0044] "At step 116, the patient's biometrics may be measured and recorded during and/or after exposure to the second VR content in a similar manner to steps 102 and 108. At step 118, a second biometric dataset for the patient can be created based on the patient's biometric data measured during step 116.");
and determine an impact score for the digital content based on the baseline dataset and an output of the iteratively trained training module ([0008] "The user's EEG-type biometric data may be measured during and/or after exposure to certain VR content and compared to previously measured EEG-type biometric data of the user to produce z-scores of change for specific brainwave types (i.e., alpha, delta, theta, etc.) in certain regions of the user's brain. The z-scores may then be used to identify statistically significant changes in the user's EEG-type biometric data (such as by identifying z-scores greater than or equal to 1.0).").

Hill does not explicitly disclose, but Krishnan teaches:
process the second biometric data and the at least one feature parameter using an iteratively trained training module ([0066] "In some implementations, the score generator 143 (e.g., a deep learning model or an XGBoost model) can be configured to iteratively receive a subset of user data from the set of past user data, a subset of setting data from the set of past setting data, and/or a subset of drug data from the set of past drug data described above and generate an output.");
and present, via the device, the impact score to the user ([0079] "an indication of state of the user (e.g., a score) […] Therefore, in some instances, the score generator 143 can determine whether a current state of the adaptive setting is successful in inducing or maintaining an optimal set. In some implementations, the therapy device 110 can include a feedback system indicator to display, to the user.").

Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to include in the system of Hill processing the second biometric data and the at least one feature parameter using an iteratively trained training module, and presenting, via the device, the impact score to the user, as taught by Krishnan, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 12, Hill discloses wherein the biometric data includes one or more of EEG data, heart rate data, respiratory data, blood pressure data, functional magnetic resonance imaging data, near-infrared spectroscopy data, or skin temperature data ([0040] "the biometrics recorded for the patient may include EEG readings, heart rate, blood pressure, respiratory rate, and skin temperature.").

Regarding claim 13, Hill discloses wherein the biometric tracker includes one or more of an EEG monitor, a heart rate monitor, a respiratory monitor, a blood pressure monitor, or a skin temperature monitor ([0036] "According to one embodiment, system 10 includes an electroencephalogram (EEG) monitor 22 to monitor the EEG activity of a user.").

Regarding claim 14, Hill discloses wherein the processor is further configured to:
receive, from the user, a target physiological state ([0104] "According to one embodiment of the present invention, a user may begin by taking a questionnaire and/or symptom checklist configured to determine the user's specific goals/needs (such as stress reduction, improving focus, etc.)");
generate, based on the impact score and the target physiological state, a modified version of the digital content ([0059] "identify statistically significant z-score changes in a user's biometric data resulting from exposure to selected VR content." [0105] "the user may be presented with a library of VR content designed to address the user's specific goals/needs.");
and present, via the device, the modified version of the digital content to the user ([0108] "Based on the results of […] biometric data analysis, additional VR content options may be offered to further achieve the desired effect.").

Regarding claim 15, Hill discloses wherein the at least one feature parameter can include one or more of a color, a texture, a shape, a lighting feature, a volume, a speed, a proximity, a location, or a sound of the content ([0041] "The VR content can include any number of different components or features, including but not limited to visual stimuli, color, lighting, movement, camera angle, sound, music, voice, pacing, timing, characters, story arc, and script, aimed at influencing a patient's emotional, psychological and/or psychiatric state.").

Regarding claim 16, Hill discloses wherein the digital content is two-dimensional video content ([0034] "VR content can be monoscopic"). Hill does not explicitly disclose, but Krishnan teaches, wherein the device is a television ([0075] "a display device (e.g., a television screen)"). Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to include in the system of Hill the device being a television as taught by Krishnan, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Regarding claim 17, Hill discloses wherein the processor is further configured to:
receive, from the user, a target psychological state ([0104] "According to one embodiment of the present invention, a user may begin by taking a questionnaire and/or symptom checklist configured to determine the user's specific goals/needs (such as stress reduction, improving focus, etc.)");
determine a difference between a current psychological state of the user and the target psychological state based on the second biometric data ([0044] "After the second biometric dataset is created, it can be analyzed and compared to the patient's first biometric dataset and/or the patient's baseline biometric dataset at step 120 to determine the effect the second VR content had on the patient's biometrics associated with psychological, psychiatric or other medical conditions.");
retrieve, from a digital content database, therapeutic digital content based on the difference between the current psychological state of the user and the target psychological state ([0046] "The selected VR content may be chosen based on the specific type of emotional, psychological and/or psychiatric state to be addressed for the user. According to one embodiment, the selected VR content is chosen from a library database containing a plurality of VR content categorized based on the content's ability to influence positive change in certain types of emotional, psychological and/or psychiatric states.");
and recommend the therapeutic digital content to the user ([0046] "providing VR content to a user as therapeutic content" Also, see Figure 4).

Regarding claim 18, Hill does not explicitly disclose, but Krishnan teaches, wherein the therapeutic digital content is stored in association with a predicted impact score ([0080] "In some instances, the adaptive setting model can guide or actively induce/maintain optimal set in the user during a digital therapy" [0082] "The profile updater 144 can then receive an indication of state of the user (e.g., the score) from the score generator 143 and in response to various states of the adaptive setting, and update a user profile of the user to generate an updated user profile. The updated user profile can be stored in the memory 131."). Therefore, it would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to include in the system of Hill the therapeutic digital content stored in association with a predicted impact score as taught by Krishnan, since the claimed invention is merely a combination of old elements, in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Regarding claim 19, Hill discloses wherein the processor is further configured to determine the predicted impact score for the therapeutic digital content based on historical biometric data associated with the user ([0100] "specific VR content experiences have been shown to result in predictable changes in biometric and brainwave patterns. As also described above, this may be assessed by comparing pre-VR biometric data to post-VR biometric data.").

Regarding claim 20, Hill discloses wherein the impact score for the digital content is unique to the user ([0059] "the EEG biometric data of the user may be measured during and/or after exposure to the VR content and compared to previously measured EEG biometric data of the user to produce z-scores of change for specific brainwave types (alpha, delta, theta, etc.) in certain regions of the brain" [0101] "The BRAINAVATAR® system has a feature called Z-builder which allows for the conversion of a raw EEG file into a quantitative reference file. Post-VR (or During-VR) EEG data can be compared to the Pre-VR reference file producing z scores of change for each variable at each region of interest for each subject.").

Prior Art Cited but Not Relied Upon
Wu, J. Y., Tsai, Y. Y., Chen, Y. J., Hsiao, F. C., Hsu, C. H., Lin, Y. F., & Liao, L. D. (2025). Digital transformation of mental health therapy by integrating digitalized cognitive behavioral therapy and eye movement desensitization and reprocessing. Medical & Biological Engineering & Computing, 63(2), 339-354. This reference is relevant because it discloses recommending and generating digital behavior therapy content based on biometric data.

US20220061757A1. This reference is relevant because it discloses collecting biometric data and generating a score to help adjust the digital content.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WINSTON FURTADO, whose telephone number is (571) 272-5349. The examiner can normally be reached Monday-Friday, 8:00 AM to 4:00 PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mamon Obeid, can be reached at (571) 270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WINSTON R FURTADO/
Examiner, Art Unit 3687
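
For orientation, the core computation that both the claims and Hill's disclosure turn on is comparing biometrics measured during content exposure against the user's own baseline; Hill expresses the change as z-scores and treats scores at or above 1.0 as significant ([0008]). A minimal sketch of that z-score-of-change step, assuming per-band baseline samples (the band names and the 1.0 threshold follow Hill's quoted paragraphs; the function name and data values are purely illustrative):

```python
import statistics

def z_scores_of_change(baseline: dict[str, list[float]],
                       during: dict[str, float]) -> dict[str, float]:
    """Z-score of each during-exposure reading against the user's own baseline."""
    scores = {}
    for band, samples in baseline.items():
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)  # sample standard deviation
        scores[band] = (during[band] - mu) / sigma
    return scores

# Illustrative pre-exposure baseline readings and during-exposure readings.
baseline = {"alpha": [9.8, 10.1, 10.4, 9.9, 10.2],
            "theta": [5.0, 5.3, 4.9, 5.2, 5.1]}
during = {"alpha": 11.2, "theta": 5.0}

scores = z_scores_of_change(baseline, during)
# Flag statistically significant changes per Hill's threshold (|z| >= 1.0).
significant = {band: z for band, z in scores.items() if abs(z) >= 1.0}
print(significant)  # alpha shows a significant change; theta does not
```

The claimed "impact score" sits on top of this kind of baseline comparison, with the output of the iteratively trained training model folded in; that model output is the piece the examiner maps to Krishnan rather than Hill.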

Prosecution Timeline

Aug 21, 2024: Application Filed
Jan 30, 2026: Non-Final Rejection under §101, §103, and §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12555685: System and Method for Detecting and Predicting Surgical Wound Infections. Granted Feb 17, 2026 (2y 5m to grant).
Patent 12456548: Systems and Methods for Graphical User Interfaces for Adequacy of Anesthesia. Granted Oct 28, 2025 (2y 5m to grant).
Patent 12431235: Automatic Identification of, and Responding to, Cognition Impairment. Granted Sep 30, 2025 (2y 5m to grant).
Patent 12343085: Methods for Improved Surgical Planning Using Machine Learning and Devices Thereof. Granted Jul 01, 2025 (2y 5m to grant).
Patent 12020786: Model for Health Record Classification. Granted Jun 25, 2024 (2y 5m to grant).
Study what changed in these cases to get them past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 19%
With Interview: 46% (+26.2%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 145 resolved cases by this examiner. Grant probability is derived from the career allow rate.
