Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The amendment filed on 6-4-25 has been entered and fully considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6, 7, 13-15, 17, 18, 21, and 22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Samec et al. (US 2021/0398357).
Regarding claim 1, Samec (Figs. 1 and 9) discloses a method, comprising:
obtaining information (in step 1720, e.g., obtaining a “user reaction to stimulus” as discussed in [0545]) characterizing at least one user interacting with a virtual environment (“display virtual content to a user” discussed in [0400], see also “virtual reality display system 80” discussed in [0543]) comprised of a plurality of virtual objects (“the user to perform gestures that interact with physical and/or virtual objects (e.g., using a tool)” discussed in [0618]);
applying the information to at least one analytics engine (in step 1730, e.g., “the user's level of alertness may then be analyzed” discussed in [0545]) to obtain a sentiment status indicating a sentiment of the at least one user in the virtual environment (e.g., if a user is in “high-stress environments” as discussed in [0530], see also determining the user's state of “consciousness, drowsiness, fatigue, unconsciousness” discussed in [0545], and detecting “that the user is distracted, unfocused, or otherwise exhibiting diminished attention toward the imagery” as discussed in [0550]) and at least one adaptation of one or more of the plurality of virtual objects (e.g., a “masking” adaptation, discussed in [0530]) in the virtual environment based at least in part on the sentiment status (e.g., based on determining the user is in a “high-stress environment” as discussed above, see [0530]), wherein an application of a given one of the at least one adaptation (the “masking” discussed above), determined by the at least one analytics engine (“analyzed” discussed in [0545]), based at least in part on the sentiment status (e.g., if “the display system determines that the user sees an object that is known to elicit a strong negative emotional reaction” as discussed in [0530]), of one or more of the plurality of virtual objects in the virtual environment is one or more of suppressed and delayed (“an image presented later in time to mask an image presented earlier in time” discussed in [0527], which the examiner interprets as reading upon the claimed “suppressed”) in response to the given adaptation impacting an area of focus in the virtual environment of one or more of the at least one user (in response to when “the display system determines that the user sees an object that is known to elicit a strong negative emotional reaction” as discussed in [0530], which the examiner interprets as reading on the claimed “area of focus”); and
automatically initiating an update of a rendering of the virtual environment using the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment (e.g., “display a masked image that has previously been established to calm the user” as discussed in [0530]);
wherein the method is performed by at least one processing device (140) comprising a processor coupled to a memory (“local processing and data module 140 may comprise a hardware processor, as well as digital memory, such as non-volatile memory (e.g., flash memory or hard disk drives)” discussed in [0472]).
Regarding claim 13, Samec (Figs. 1 and 9) discloses an apparatus, comprising:
at least one processing device (140) comprising a processor coupled to a memory (“local processing and data module 140 may comprise a hardware processor, as well as digital memory, such as non-volatile memory (e.g., flash memory or hard disk drives)” discussed in [0472]);
the at least one processing device being configured to implement the following steps (as discussed in [0859], the processing device executes the method described below):
obtaining information (in step 1720, e.g., obtaining a “user reaction to stimulus” as discussed in [0545]) characterizing at least one user interacting with a virtual environment (“display virtual content to a user” discussed in [0400], see also “virtual reality display system 80” discussed in [0543]) comprised of a plurality of virtual objects (“the user to perform gestures that interact with physical and/or virtual objects (e.g., using a tool)” discussed in [0618]);
applying the information to at least one analytics engine (in step 1730, e.g., “the user's level of alertness may then be analyzed” discussed in [0545]) to obtain a sentiment status indicating a sentiment of the at least one user in the virtual environment (e.g., if a user is in “high-stress environments” as discussed in [0530], see also determining the user's state of “consciousness, drowsiness, fatigue, unconsciousness” discussed in [0545], and detecting “that the user is distracted, unfocused, or otherwise exhibiting diminished attention toward the imagery” as discussed in [0550]) and at least one adaptation of one or more of the plurality of virtual objects (e.g., a “masking” adaptation, discussed in [0530]) in the virtual environment based at least in part on the sentiment status (e.g., based on determining the user is in a “high-stress environment” as discussed above, see [0530]), wherein an application of a given one of the at least one adaptation (the “masking” discussed above), determined by the at least one analytics engine (“analyzed” discussed in [0545]), based at least in part on the sentiment status (e.g., if “the display system determines that the user sees an object that is known to elicit a strong negative emotional reaction” as discussed in [0530]), of one or more of the plurality of virtual objects in the virtual environment is one or more of suppressed and delayed (“an image presented later in time to mask an image presented earlier in time” discussed in [0527], which the examiner interprets as reading upon the claimed “suppressed”) in response to the given adaptation impacting an area of focus in the virtual environment of one or more of the at least one user (in response to when “the display system determines that the user sees an object that is known to elicit a strong negative emotional reaction” as discussed in [0530], which the examiner interprets as reading on the claimed “area of focus”); and
automatically initiating an update of a rendering of the virtual environment using the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment (e.g., “display a masked image that has previously been established to calm the user” as discussed in [0530]).
Regarding claim 14, Samec (Figs. 1 and 9) discloses a non-transitory processor-readable storage medium (“non-transitory computer-readable medium” discussed in [0861]) having stored therein program code of one or more software programs (“code module may be compiled and linked into an executable program” discussed in [0859], and “Code modules or any type of data may be stored on any type of non-transitory computer-readable medium” discussed in [0861]), wherein the program code when executed by at least one processing device (140) causes the at least one processing device to perform the following steps:
obtaining information (in step 1720, e.g., obtaining a “user reaction to stimulus” as discussed in [0545]) characterizing at least one user interacting with a virtual environment (“display virtual content to a user” discussed in [0400], see also “virtual reality display system 80” discussed in [0543]) comprised of a plurality of virtual objects (“the user to perform gestures that interact with physical and/or virtual objects (e.g., using a tool)” discussed in [0618]);
applying the information to at least one analytics engine (in step 1730, e.g., “the user's level of alertness may then be analyzed” discussed in [0545]) to obtain a sentiment status indicating a sentiment of the at least one user in the virtual environment (e.g., if a user is in “high-stress environments” as discussed in [0530], see also determining the user's state of “consciousness, drowsiness, fatigue, unconsciousness” discussed in [0545], and detecting “that the user is distracted, unfocused, or otherwise exhibiting diminished attention toward the imagery” as discussed in [0550]) and at least one adaptation of one or more of the plurality of virtual objects (e.g., a “masking” adaptation, discussed in [0530]) in the virtual environment based at least in part on the sentiment status (e.g., based on determining the user is in a “high-stress environment” as discussed above, see [0530]), wherein an application of a given one of the at least one adaptation (the “masking” discussed above), determined by the at least one analytics engine (“analyzed” discussed in [0545]), based at least in part on the sentiment status (e.g., if “the display system determines that the user sees an object that is known to elicit a strong negative emotional reaction” as discussed in [0530]), of one or more of the plurality of virtual objects in the virtual environment is one or more of suppressed and delayed (“an image presented later in time to mask an image presented earlier in time” discussed in [0527], which the examiner interprets as reading upon the claimed “suppressed”) in response to the given adaptation impacting an area of focus in the virtual environment of one or more of the at least one user (in response to when “the display system determines that the user sees an object that is known to elicit a strong negative emotional reaction” as discussed in [0530], which the examiner interprets as reading on the claimed “area of focus”); and
automatically initiating an update of a rendering of the virtual environment using the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment (e.g., “display a masked image that has previously been established to calm the user” as discussed in [0530]).
Regarding claim 2, Samec discloses a method as discussed above, wherein the information characterizing the at least one user comprises one or more of sensor data associated with the at least one user interacting with the virtual environment (e.g., “inward facing cameras 24 may detect that a user's eyes are not rotating or accommodating in response to a changing image presented by light sources 26” as discussed in [0550]) and information characterizing an avatar representation of the at least one user interacting with the virtual environment (this limitation is not being examined due to the alternative language “one or more of”).
Regarding claim 3, Samec discloses a method as discussed above, wherein the sentiment status comprises one or more of an anxious status (when the user needs to be “calmed” as discussed in [0530]), a positive status (“attributed to positive social or behavioral experience/association with the object in the reassembled image” discussed in [0507]), a negative status (“the user is distracted, unfocused, or otherwise exhibiting diminished attention” as discussed in [0550]) and a neutral status (“the user may have no reaction to particular stimulus” and “stimulus that previously elicited a neutral response” discussed in [0836]).
Regarding claim 4, Samec discloses a method as discussed above, further comprising obtaining an engagement level of the at least one user (e.g., their level of attention, with a “user's level of attention” and “performed as an assessment of a student's engagement” discussed in [0549]) and determining at least one adaptation of one or more of the plurality of virtual objects in the virtual environment based at least in part on the engagement level (e.g., if attention is impaired then displaying “visual content likely to gain the attention of the user” as discussed in [0548]).
Regarding claim 6, Samec discloses a method as discussed above, wherein the at least one analytics engine comprises at least one reinforcement learning agent that determines the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment (e.g., “bathe the user's view of the environment with a pleasing color” discussed in [0839]) based at least in part on the sentiment status using a plurality of states of the virtual environment and a reward function (“present positive reinforcement when a user remains focused on a task for a specified time period, and may provide negative reinforcement if the user is frequently distracted” discussed in [0558], with “provide rewards” more specifically discussed in [0837]).
Regarding claim 7, Samec discloses a method as discussed above, wherein the at least one reinforcement learning agent traverses the plurality of states (“detect a user's physical state” discussed in [0834], e.g., a state “when a user remains focused on a task for a specified time period” or “if the user is frequently distracted” as discussed in [0558]) and is trained to select a particular action for a given state (“present positive reinforcement when a user remains focused on a task for a specified time period” discussed in [0558]), wherein the particular action comprises a given virtual environment adaptation that modifies one or more parameters of the virtual environment (“for example, positive reinforcement may take the form of pleasing audible tones, colors, and/or imagery that are outputted by the display system” discussed in [0839]).
Regarding claim 15, Samec discloses a non-transitory processor-readable storage medium as discussed above, and further discloses obtaining an engagement level of the at least one user (e.g., their level of attention, with a “user's level of attention” and “performed as an assessment of a student's engagement” discussed in [0549]) and determining at least one adaptation of one or more of the plurality of virtual objects in the virtual environment based at least in part on the engagement level (e.g., if attention is impaired then displaying “visual content likely to gain the attention of the user” as discussed in [0548]).
Regarding claim 17, Samec discloses a non-transitory processor-readable storage medium as discussed above, wherein the at least one analytics engine comprises at least one reinforcement learning agent that determines the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment based at least in part on the sentiment status using a plurality of states of the virtual environment and a reward function (“present positive reinforcement when a user remains focused on a task for a specified time period, and may provide negative reinforcement if the user is frequently distracted” discussed in [0558], with “provide rewards” more specifically discussed in [0837]).
Regarding claim 18, Samec discloses a non-transitory processor-readable storage medium as discussed above, wherein the at least one reinforcement learning agent traverses the plurality of states (“detect a user's physical state” discussed in [0834], e.g., a state “when a user remains focused on a task for a specified time period” or “if the user is frequently distracted” as discussed in [0558]) and is trained to select a particular action for a given state (“present positive reinforcement when a user remains focused on a task for a specified time period” discussed in [0558]), wherein the particular action comprises a given virtual environment adaptation that modifies one or more parameters of the virtual environment (“for example, positive reinforcement may take the form of pleasing audible tones, colors, and/or imagery that are outputted by the display system” discussed in [0839]).
Regarding claim 21, Samec discloses a non-transitory processor-readable storage medium as discussed above, wherein the information characterizing the at least one user comprises one or more of sensor data associated with the at least one user interacting with the virtual environment (e.g., “inward facing cameras 24 may detect that a user's eyes are not rotating or accommodating in response to a changing image presented by light sources 26” as discussed in [0550]) and information characterizing an avatar representation of the at least one user interacting with the virtual environment (this limitation is not being examined due to the alternative language “one or more of”).
Regarding claim 22, Samec discloses a non-transitory processor-readable storage medium as discussed above, wherein the sentiment status comprises one or more of an anxious status (when the user needs to be “calmed” as discussed in [0530]), a positive status (“attributed to positive social or behavioral experience/association with the object in the reassembled image” discussed in [0507]), a negative status (“the user is distracted, unfocused, or otherwise exhibiting diminished attention” as discussed in [0550]) and a neutral status (“the user may have no reaction to particular stimulus” and “stimulus that previously elicited a neutral response” discussed in [0836]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Samec as applied to claims 1 and 14 above, and further in view of Agarwal et al. (US 2022/0051479).
Regarding claim 5, Samec discloses a method as discussed above, wherein the at least one analytics engine determines the sentiment status indicating the sentiment of the at least one user in the virtual environment (e.g., for determining the user's state of “consciousness, drowsiness, fatigue, unconsciousness” as discussed in [0545], see also detecting “that the user is distracted, unfocused, or otherwise exhibiting diminished attention toward the imagery” as discussed in [0550]) by processing an audio-based sentiment (“monitor the user through additional sensors 30, such as a microphone, to detect spoken responses or other sounds (e.g., a yawn, sigh, involuntary sound, or the like)” discussed in [0550]) and a video-based sentiment indicating the sentiment of the at least one user in the virtual environment (“inward facing cameras 24 may detect that a user's eyes are not rotating or accommodating in response to a changing image presented by light sources 26, indicating that the user is distracted, unfocused, or otherwise exhibiting diminished attention toward the imagery” discussed in [0550]).
However, Samec fails to teach or suggest wherein the at least one analytics engine comprises “at least one ensemble model” that “determines the sentiment status by processing an audio-based sentiment score and a video-based sentiment score.”
Agarwal discloses a method wherein at least one analytics engine (144) comprises at least one ensemble model (“using one or more ML models” discussed in [0043]) that determines the sentiment status indicating the sentiment of the at least one user (“a score or rating indicating the user sentiment” discussed in [0043]) in the virtual environment (“virtual display of a virtual scene” as discussed in [0007]), wherein the at least one ensemble model determines the sentiment status by processing an audio-based sentiment (“perform the audio level or audio state detection using one or more ML models” discussed in [0043]) and a video-based sentiment (“one or more ML models may be configured to receive the image data that includes the user's face and to generate an output that indicates a label of the facial state” discussed in [0043]) indicating the sentiment of the at least one user in the virtual environment (as discussed above, e.g., “a score or rating indicating the user sentiment” discussed in [0043]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samec so the at least one analytics engine comprises at least one ensemble model that determines the sentiment status indicating the sentiment of the at least one user in the virtual environment by processing an audio-based sentiment score and a video-based sentiment score indicating the sentiment of the at least one user in the virtual environment as taught by Agarwal because this allows the user sentiment detection to be improved by updating and training the model (e.g., to “tune parameters associated with the ML models 138” as discussed in [0044]).
Regarding claim 16, Samec discloses a non-transitory processor-readable storage medium as discussed above, wherein the at least one analytics engine determines the sentiment status indicating the sentiment of the at least one user in the virtual environment (e.g., for determining the user's state of “consciousness, drowsiness, fatigue, unconsciousness” as discussed in [0545], see also detecting “that the user is distracted, unfocused, or otherwise exhibiting diminished attention toward the imagery” as discussed in [0550]) by processing an audio-based sentiment (“monitor the user through additional sensors 30, such as a microphone, to detect spoken responses or other sounds (e.g., a yawn, sigh, involuntary sound, or the like)” discussed in [0550]) and a video-based sentiment indicating the sentiment of the at least one user in the virtual environment (“inward facing cameras 24 may detect that a user's eyes are not rotating or accommodating in response to a changing image presented by light sources 26, indicating that the user is distracted, unfocused, or otherwise exhibiting diminished attention toward the imagery” discussed in [0550]).
However, Samec fails to teach or suggest wherein the at least one analytics engine comprises “at least one ensemble model” that “determines the sentiment status by processing an audio-based sentiment score and a video-based sentiment score.”
Agarwal discloses a non-transitory processor-readable storage medium (“non-transitory computer-readable storage medium” discussed in [0013]) wherein at least one analytics engine (144) comprises at least one ensemble model (“using one or more ML models” discussed in [0043]) that determines the sentiment status indicating the sentiment of the at least one user (“a score or rating indicating the user sentiment” discussed in [0043]) in the virtual environment (“virtual display of a virtual scene” as discussed in [0007]), wherein the at least one ensemble model determines the sentiment status by processing an audio-based sentiment (“perform the audio level or audio state detection using one or more ML models” discussed in [0043]) and a video-based sentiment (“one or more ML models may be configured to receive the image data that includes the user's face and to generate an output that indicates a label of the facial state” discussed in [0043]) indicating the sentiment of the at least one user in the virtual environment (as discussed above, e.g., “a score or rating indicating the user sentiment” discussed in [0043]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samec so the at least one analytics engine comprises at least one ensemble model that determines the sentiment status indicating the sentiment of the at least one user in the virtual environment by processing an audio-based sentiment score and a video-based sentiment score indicating the sentiment of the at least one user in the virtual environment as taught by Agarwal because this allows the user sentiment detection to be improved by updating and training the model (e.g., to “tune parameters associated with the ML models 138” as discussed in [0044]).
Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Samec as applied to claims 1 and 14 above, and further in view of Bratty et al. (US 2022/0262504).
Regarding claim 8, Samec discloses a method as discussed above; however, Samec fails to teach or suggest wherein the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment comprises presenting, to one or more of the at least one user, a sentiment-based phrase selected from a plurality of sentiment-based phrases in the virtual environment.
Bratty discloses a method wherein at least one adaptation of one or more of the plurality of virtual objects in the virtual environment (“encouragement may be provided preferably from within the virtual environment” discussed in [0245]) comprises presenting, to one or more of the at least one user, a sentiment-based phrase (corresponding to the “audible message” discussed in [0238] and “supportive messages” discussed in [0241]) selected from a plurality of sentiment-based phrases in the virtual environment (e.g., selecting from phrases 604, 608, and 612, see “rewarded by the arrangement by e.g. supportive messages” as discussed in [0241], and “issue supportive and encouraging statements towards the user (statements may be selected from a plurality of options associated with different user statuses/performances)” discussed in [0133]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samec so the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment comprises presenting, to one or more of the at least one user, a sentiment-based phrase selected from a plurality of sentiment-based phrases in the virtual environment as taught by Bratty because Samec and Bratty are each directed towards providing reinforcement to a user (“the positive and negative reinforcement may be provided as… audible content” discussed in [0839] of Samec and “verbal reinforcement/encouraging messages” discussed in [0237] of Bratty) and this can “motivate goal relevant behavior, which may get the user to better keep up with the schedule” (see [0237] of Bratty).
Regarding claim 19, Samec discloses a non-transitory processor-readable storage medium as discussed above; however, Samec fails to teach or suggest wherein the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment comprises presenting, to one or more of the at least one user, a sentiment-based phrase selected from a plurality of sentiment-based phrases in the virtual environment.
Bratty discloses a non-transitory processor-readable storage medium (“non-transitory computer-readable carrier medium” discussed in [0030]) wherein at least one adaptation of one or more of the plurality of virtual objects in the virtual environment (“encouragement may be provided preferably from within the virtual environment” discussed in [0245]) comprises presenting, to one or more of the at least one user, a sentiment-based phrase (corresponding to the “audible message” discussed in [0238] and “supportive messages” discussed in [0241]) selected from a plurality of sentiment-based phrases in the virtual environment (e.g., selecting from phrases 604, 608, and 612, see “rewarded by the arrangement by e.g. supportive messages” as discussed in [0241], and “issue supportive and encouraging statements towards the user (statements may be selected from a plurality of options associated with different user statuses/performances)” discussed in [0133]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samec so the at least one adaptation of one or more of the plurality of virtual objects in the virtual environment comprises presenting, to one or more of the at least one user, a sentiment-based phrase selected from a plurality of sentiment-based phrases in the virtual environment as taught by Bratty because Samec and Bratty are directed towards providing reinforcement to a user (“the positive and negative reinforcement may be provided as… audible content” discussed in [0839] of Samec and “verbal reinforcement/encouraging messages” discussed in [0237] of Bratty) and this can “motivate goal relevant behavior, which may get the user to better keep up with the schedule” (see [0237] of Bratty).
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Samec as applied to claim 1 above, and further in view of Fear (US 2022/0040573).
Regarding claim 10, Samec discloses a method as discussed above; however, Samec fails to teach or suggest wherein the virtual environment is generated by performing the following steps:
obtaining, from one or more users, session information characterizing one or more requirements of a session of a virtual environment;
extracting the one or more requirements of the virtual environment from the session information;
generating an initial population comprising a plurality of general virtual objects for placement in the virtual environment that satisfy the one or more requirements of the session of the virtual environment;
applying the plurality of virtual objects to a virtual environment updating module that employs an evolutionary algorithm to evolve the initial population by replacing one or more of the plurality of allocated general virtual objects with one or more corresponding replacement virtual objects based at least in part on additional information associated with the session; and
automatically initiating a rendering of the virtual environment using the one or more corresponding replacement virtual objects.
Fear (Fig. 1) discloses a method comprising:
a virtual environment (a game “environment” discussed in [0036], with “virtual reality display for playing the game” further discussed in [0043]) that is generated by performing the following steps:
obtaining, from one or more users, session information (called “Game state information” in [0018]) characterizing one or more requirements of a session of a virtual environment (eg. including “digitally stored information about game settings, player or avatar customizations, progress and achievements in the game, state information…” as discussed in [0018]);
extracting the one or more requirements of the virtual environment from the session information (e.g., a “default state of a map or objects in the map of a video game” discussed in [0020]);
generating an initial population comprising a plurality of general virtual objects for placement in the virtual environment that satisfy the one or more requirements of the session of the virtual environment (e.g., corresponding to objects such as a “rock 148, the building 146 (and/or the other buildings), trees, etc.” as discussed in [0036]);
applying the plurality of virtual objects to a virtual environment updating module that employs an evolutionary algorithm to evolve the initial population (e.g., to change the “default state of a map or objects”) by replacing one or more of the plurality of allocated general virtual objects with one or more corresponding replacement virtual objects based at least in part on additional information associated with the session (e.g., replacing the default objects based on the changes, with “changes to the world state, such as destroyed or damaged building or objects, acquired items” discussed in [0020]); and
automatically initiating a rendering of the virtual environment (130) using the one or more corresponding replacement virtual objects (as seen in Fig. 1, virtual objects are updated, e.g., “the location of… player 152 (indicated by a corresponding character in FIG. 1) as the players (e.g., characters controlled thereby) move about the environment” as discussed in [0036]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samec so the method includes obtaining session information, extracting requirements, generating an initial population of general virtual objects, applying changes to the virtual objects, and rendering the virtual environment as taught by Fear because Samec and Fear are each directed towards virtual reality games (e.g., a “video game” discussed in [0558] of Samec) and this allows a user to “begin playing the local instance of the video game at the same point in the game where the user left off” previously (see [0021]).
Regarding claim 11, Samec and Fear disclose a method as discussed above, and Fear further discloses the method comprising enriching the one or more requirements of the virtual environment (e.g., enriching a player’s avatar by allowing it to be customized) based at least in part on one or more properties of the session information (“game state information may include customizations to a player avatar or character” discussed in [0019]).
It would have been obvious to one of ordinary skill in the art to combine Samec and Fear for the same reasons as discussed above.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Samec and Fear as applied to claim 10 above, and further in view of Kaehler et al. (US 2021/0133506).
Regarding claim 12, Samec and Fear disclose a method as discussed above, however fail to teach or suggest wherein the virtual environment updating module employs a genetic algorithm to evolve the initial population.
Kaehler discloses a method wherein a virtual environment (“virtual reality “VR” scenario” discussed in [0079]) updating module (“generate a virtual content, virtual image information, or a modified version thereof… and cause the virtual content to be provided to a wearer of the ARD via its display” discussed in [0078]) employs a genetic algorithm (“genetic algorithm” discussed in [0063]) to evolve the initial population (e.g., to provide the “modified version” of the virtual content, as discussed above, “using the NN” as discussed in [0078], “a machine learning model (e.g., a neural network)” discussed in [0002], and the machine learning model includes the “genetic algorithm” as discussed above).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Samec and Fear to employ a genetic algorithm to evolve the initial population as taught by Kaehler because this allows the model to “predict the proper outputs for inputs (e.g., novel inputs that may not be present in the original training set)” (see [0026]).
Response to Arguments
Applicant’s arguments, see pages 10 and 11, filed 4-28-25, with respect to the rejections of claims 1, 13, and 14 under Samec, Gates, and Liu have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Samec.
Regarding claims 1, 13, and 14, the applicant argues on pages 10 and 11 that Samec, Gates, and Liu fail to teach or suggest virtual objects that are “suppressed or delayed” and that the virtual objects in Liu are not “adaptations” of existing objects.
However, as discussed above in the modified rejection relying solely on Samec, Samec teaches adapting existing virtual objects (a “first image” is presented to the user, discussed in [0530], and so the first object exists at one point in time), including suppressing one of the virtual objects in the virtual environment (“and then a second image also presented to the user's eyes,” wherein “the second image may mask the first image” as discussed in [0530], and so at a second, later point in time the first object, corresponding to the “first image,” is hidden, or suppressed) based on a user sentiment (e.g., based on the user having a “negative emotional reaction” to the object as discussed in [0530]).
Samec also teaches that the masking adaptation is in response to when “the display system determines that the user sees an object” (see [0530]), which the examiner interprets as reading on the claimed “impacting an area of focus… of the at least one user.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN M BLANCHA whose telephone number is (571)270-5890. The examiner can normally be reached Monday to Friday, 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen, can be reached at 571-272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN M BLANCHA/Primary Examiner, Art Unit 2623