DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
Claims 1, 9-11, and 17 have been amended; claims 5-7 and 14-15 have been canceled; and claims 21-23 have been added in the response filed November 26, 2025.
Claims 1-4, 8-13, and 16-23 are pending.
Claims 1-4, 8-13, and 16-23 are rejected.
Detailed rejections begin on page 3.
Response to Arguments begins on page 52.
Claim Objections
Claim 17 is objected to because of the following informalities:
lines 20-21: “a second item of content comprising one of the recommendations form the list” should read “a second item of content comprising one of the recommendations from the list”
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 8-10, and 21-23 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The steps for determining eligibility under 35 U.S.C. 101 can be found in MPEP §§ 2106.03-2106.05.
Under Step 1, the claims are directed to statutory categories. Specifically, the method, as claimed in claims 1-4, 8-10, and 21-23, is directed to a process.
While the claims fall within statutory categories, under Step 2A, Prong 1, the claimed invention recites the abstract idea of making a recommendation to a user. Specifically, claim 1 recites the abstract idea of:
presenting a stimulus to a user, wherein the stimulus comprises at least a portion of a first item of content stored;
detecting at least one non-verbal reaction of the user to the stimulus;
processing the detected at least one non-verbal reaction to determine a response of the user to the stimulus, wherein the response includes determining whether the user has reacted positively or negatively to the stimulus;
identifying ones of the content items stored based on the determined response of the user to the stimulus, wherein the selecting is performed based on data associated with the first item of content and data associated the plurality of content items stored;
displaying a list of recommendations, wherein the list of recommendations comprises the identified ones of the plurality of content items stored; and
prompting the user to select a second item of content comprising one of the recommendations from the list.
Under Step 2A, Prong 1, it is necessary to evaluate whether the claim recites a judicial exception by referring to subject matter groupings articulated in the guidance. When considering MPEP §2106.04(a), the claims recite an abstract idea. For example, claim 1 recites the abstract idea of making recommendations, as noted above. This concept is considered to be a certain method of organizing human activity. Certain methods of organizing human activity are defined in the MPEP as including “fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).” MPEP §2106.04(a)(2) subsection II. In this case, the abstract idea recited in claim 1 is a certain method of organizing human activity because displaying a list of recommendations, wherein the list of recommendations comprises the identified ones of the plurality of content items stored and prompting the user to select a second item of content comprising one of the recommendations from the list are marketing and sales activities. Thus, claim 1 recites an abstract idea.
The recited limitations of claim 1 also recite an abstract idea because they are considered to be mental processes. As described in the MPEP, mental processes are “concepts performed in the human mind (including an observation, evaluation, judgment, opinion)”. MPEP §2106.04(a)(2) subsection III. In this case, presenting a stimulus to a user, wherein the stimulus comprises at least a portion of a first item of content stored; detecting at least one non-verbal reaction of the user to the stimulus; processing the detected at least one non-verbal reaction to determine a response of the user to the stimulus, wherein the response includes determining whether the user has reacted positively or negatively to the stimulus; identifying ones of the content items stored based on the determined response of the user to the stimulus, wherein the selecting is performed based on data associated with the first item of content and data associated the plurality of content items stored; displaying a list of recommendations, wherein the list of recommendations comprises the identified ones of the plurality of content items stored; and prompting the user to select a second item of content comprising one of the recommendations from the list are types of judgment. Thus, claim 1 recites an abstract idea.
Under Step 2A, Prong 2, if it is determined that the claims recite a judicial exception, it is then necessary to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of that exception. See MPEP §2106.04(d). In this case, claim 1 includes additional elements such as a media system comprising a media device associated with a display device; multimedia content streamed from a content database storing a plurality of multimedia content items; a sensor of the media system; a processor of the media system; metadata associated with the first item of media content and metadata associated the plurality of multimedia content items stored in the content database; and a second item of multimedia content.
Although reciting additional elements, the additional elements do not integrate the abstract idea into a practical application because they merely amount to no more than an instruction to apply the abstract idea using a generic computer or merely use a computer as a tool to perform the abstract idea. These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration. Similar to the claims in Alice, claim 1 merely recites a commonplace business method (i.e., making a recommendation) being applied on a general purpose computer. See MPEP §§2106.04(d) and 2106.05(f). Thus, the claimed additional elements are merely generic elements and the implementation of the elements merely amounts to no more than an instruction to apply the abstract idea using a generic computer. Since the additional elements merely include instructions to implement the abstract idea on a generic computer or merely use a generic computer as a tool to perform an abstract idea, the abstract idea has not been integrated into a practical application. As such, claim 1 is directed to an abstract idea.
Under Step 2B, if it is determined that the claims recite a judicial exception that is not integrated into a practical application of that exception, it is then necessary to evaluate the additional elements individually and in combination to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself). See MPEP §2106.05.
In this case, as noted above, the additional elements recited in independent claim 1 are recited and described in a generic manner and merely amount to no more than an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea.
Even when considered as an ordered combination, the additional elements of representative claim 1 do not add anything that is not already present when they are considered individually. In Alice, the court considered the additional elements “as an ordered combination,” and determined that “the computer components ... ‘ad[d] nothing ... that is not already present when the steps are considered separately’ and simply recite intermediated settlement as performed by a generic computer.” Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) (citing Mayo, 566 U.S. at 79, 101 USPQ2d at 1972). See also MPEP §2106.05(f). Similarly, when viewed as a whole, claim 1 simply conveys the abstract idea itself facilitated by generic computing components. Therefore, under Step 2B, there are no meaningful limitations in claim 1 that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception itself.
As such, claim 1 is ineligible.
Dependent claims 2-4, 8-10, and 21-23 do not aid in the eligibility of independent claim 1. For example, claims 9-10 and 23 merely further define the abstract limitations of claim 1. Also, claims 2-4, 8, and 21-22 merely provide further embellishments of the abstract limitations recited in independent claim 1.
Additionally, it is noted that claims 2-3, 9-10, and 23 do not include further additional elements. Therefore, the claims do not integrate the abstract idea into a practical application because they merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea. The claims also do not amount to significantly more than the abstract idea because they merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea.
Furthermore, it is noted that claim 4 includes further additional elements of wherein the subvocalization signals are detected using an electrode positioned on a jaw of the user to detect neuromuscular signals; claim 8 includes further additional elements of wherein the detecting is performed using a sensor incorporated into a device worn by the user; claim 21 includes further additional elements of wherein modifying the presentation comprises muting audio of the second item of multimedia content; and claim 22 includes further additional elements of wherein modifying the presentation comprises blurring video of the second item of multimedia content. However, these additional elements do not integrate the abstract idea into a practical application because they merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea. These additional elements are merely generic elements and are likewise described in a generic manner in Applicant’s specification. Additionally, the additional elements do not amount to significantly more because they merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea.
Thus, dependent claims 2-4, 8-10, and 21-23 are also ineligible.
Claims 11-13 and 16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The steps for determining eligibility under 35 U.S.C. 101 can be found in MPEP §§ 2106.03-2106.05.
Under Step 1, the claims are directed to statutory categories. Specifically, the multimedia system, as claimed in claims 11-13 and 16, is directed to a machine.
While the claims fall within statutory categories, under Step 2A, Prong 1, the claimed invention recites the abstract idea of making a recommendation to a user. Specifically, claim 11 recites the abstract idea of:
items of content, wherein each of the items of content has metadata associated therewith;
at least one display for displaying selected ones of the items of content to a group of users;
detecting non-verbal reactions of each user of the group of users to a stimulus presented to the group of users on the at least one display, wherein the stimulus comprises a portion of a first one of the items of content;
wherein:
process the detected non-verbal reactions of the group of users are processed on a per-user basis to determine responses of the group of users to the stimulus;
identify a plurality of the content items stored based on the determined responses of the group of users to the stimulus, wherein the selecting is performed based on data associated with the first one of the items of content and data associated with the identified plurality of content items stored.
Under Step 2A, Prong 1, it is necessary to evaluate whether the claim recites a judicial exception by referring to subject matter groupings articulated in the guidance. When considering MPEP §2106.04(a), the claims recite an abstract idea. For example, claim 11 recites the abstract idea of making recommendations, as noted above. This concept is considered to be a certain method of organizing human activity. Certain methods of organizing human activity are defined in the MPEP as including “fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).” MPEP §2106.04(a)(2) subsection II. In this case, the abstract idea recited in claim 11 is a certain method of organizing human activity because identify a plurality of the content items stored based on the determined responses of the group of users to the stimulus, wherein the selecting is performed based on data associated with the first one of the items of content and data associated with the identified plurality of content items stored is a marketing and sales activity. Thus, claim 11 recites an abstract idea.
The recited limitations of claim 11 also recite an abstract idea because they are considered to be mental processes. As described in the MPEP, mental processes are “concepts performed in the human mind (including an observation, evaluation, judgment, opinion)”. MPEP §2106.04(a)(2) subsection III. In this case, displaying selected ones of the items of content to a group of users; detecting non-verbal reactions of each user of the group of users to a stimulus presented to the group of users on the at least one display, wherein the stimulus comprises a portion of a first one of the items of content; process the detected non-verbal reactions of the group of users are processed on a per-user basis to determine responses of the group of users to the stimulus; and identify a plurality of the content items stored based on the determined responses of the group of users to the stimulus, wherein the selecting is performed based on data associated with the first one of the items of content and data associated with the identified plurality of content items stored are types of judgment. Thus, claim 11 recites an abstract idea.
Under Step 2A, Prong 2, if it is determined that the claims recite a judicial exception, it is then necessary to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of that exception. See MPEP §2106.04(d). In this case, claim 11 includes additional elements such as a multimedia system comprising: a processor; a memory device; a database; at least one sensing device; a plurality of the content items stored in the database; and metadata associated with the first one of the items of content and metadata associated with the identified plurality of content items stored in the database.
Although reciting additional elements, the additional elements do not integrate the abstract idea into a practical application because they merely amount to no more than an instruction to apply the abstract idea using a generic computer or merely use a computer as a tool to perform the abstract idea. These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration. Similar to the claims in Alice, claim 11 merely recites a commonplace business method (i.e., making a recommendation) being applied on a general purpose computer. See MPEP §§2106.04(d) and 2106.05(f). Thus, the claimed additional elements are merely generic elements and the implementation of the elements merely amounts to no more than an instruction to apply the abstract idea using a generic computer. Since the additional elements merely include instructions to implement the abstract idea on a generic computer or merely use a generic computer as a tool to perform an abstract idea, the abstract idea has not been integrated into a practical application. As such, claim 11 is directed to an abstract idea.
Under Step 2B, if it is determined that the claims recite a judicial exception that is not integrated into a practical application of that exception, it is then necessary to evaluate the additional elements individually and in combination to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself). See MPEP §2106.05.
Here, as noted above, the additional elements recited in independent claim 11 are recited and described in a generic manner and merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea.
Even when considered as an ordered combination, the additional elements of claim 11 do not add anything that is not already present when they are considered individually. In Alice, the court considered the additional elements “as an ordered combination,” and determined that “the computer components ... ‘ad[d] nothing ... that is not already present when the steps are considered separately’ and simply recite intermediated settlement as performed by a generic computer.” Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) (citing Mayo, 566 U.S. at 79, 101 USPQ2d at 1972). See also MPEP §2106.05(f). Similarly, when viewed as a whole, claim 11 simply conveys the abstract idea itself facilitated by generic computing components. Therefore, under Step 2B, there are no meaningful limitations in claim 11 that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception itself.
As such, claim 11 is ineligible.
Dependent claims 12-13 and 16 do not aid in the eligibility of independent claim 11. For example, claims 12-13 and 16 merely provide further embellishments of the abstract limitations recited in independent claim 11.
Additionally, it is noted that claim 12 includes further additional elements of wherein the detected non-verbal reaction comprises subvocalization signals and the sensing device comprises at least one electrode positioned on a jaw of the user to detect neuromuscular signals; claim 13 includes further additional elements of wherein the display comprises a television display; and claim 16 includes further additional elements of wherein the at least one sensing device comprises a plurality of sensing devices and wherein each one of the plurality of sensing devices is incorporated into a device worn by a user of the group of users. However, these additional elements do not integrate the abstract idea into a practical application because they merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea. These additional elements are merely generic elements and are likewise described in a generic manner in Applicant’s specification. Additionally, the additional elements do not amount to significantly more because they merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea.
Thus, dependent claims 12-13 and 16 are also ineligible.
Claims 17-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The steps for determining eligibility under 35 U.S.C. 101 can be found in MPEP §§ 2106.03-2106.05.
Under Step 1, the claims are directed to statutory categories. Specifically, the one or more non-transitory computer-readable media, as claimed in claims 17-20, are directed to an article of manufacture.
While the claims fall within statutory categories, under Step 2A, Prong 1, the claimed invention recites the abstract idea of making a recommendation to a user. Specifically, claim 17 recites the abstract idea of:
presenting a stimulus to a user, wherein the stimulus comprises a portion of a first item of content,
storing a plurality of content items;
detecting at least one non-verbal reaction of the user to the stimulus;
processing the detected at least one non-verbal reaction to determine a response of the user to the stimulus;
identifying ones of the content items stored based on the determined response of the user to the stimulus, wherein the selecting is performed based on data associated with the first item of content and data associated the plurality of content items stored;
displaying a list of recommendations, wherein the list of recommendations comprises the identified ones of the plurality of content items stored; and
prompting the user to select a second item of content comprising one of the recommendations form the list;
wherein the detected non-verbal reaction comprises at least one of a facial expression, kinesics, paralinguistics, body language, posture, gaze, or a physiological response of the user.
Under Step 2A, Prong 1, it is necessary to evaluate whether the claim recites a judicial exception by referring to subject matter groupings articulated in the guidance. When considering MPEP §2106.04(a), the claims recite an abstract idea. For example, claim 17 recites the abstract idea of making recommendations, as noted above. This concept is considered to be a certain method of organizing human activity. Certain methods of organizing human activity are defined in the MPEP as including “fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).” MPEP §2106.04(a)(2) subsection II. In this case, the abstract idea recited in claim 17 is a certain method of organizing human activity because displaying a list of recommendations, wherein the list of recommendations comprises the identified ones of the plurality of content items stored and prompting the user to select a second item of content comprising one of the recommendations form the list are marketing and sales activities. Thus, claim 17 recites an abstract idea.
The recited limitations of claim 17 also recite an abstract idea because they are considered to be mental processes. As described in the MPEP, mental processes are “concepts performed in the human mind (including an observation, evaluation, judgment, opinion)”. MPEP §2106.04(a)(2) subsection III. In this case, presenting a stimulus to a user, wherein the stimulus comprises a portion of a first item of content; storing a plurality of content items; detecting at least one non-verbal reaction of the user to the stimulus; processing the detected at least one non-verbal reaction to determine a response of the user to the stimulus; identifying ones of the content items stored based on the determined response of the user to the stimulus, wherein the selecting is performed based on data associated with the first item of content and data associated the plurality of content items stored; displaying a list of recommendations, wherein the list of recommendations comprises the identified ones of the plurality of content items stored; and prompting the user to select a second item of content comprising one of the recommendations form the list; wherein the detected non-verbal reaction comprises at least one of a facial expression, kinesics, paralinguistics, body language, posture, gaze, or a physiological response of the user are types of judgment. Thus, claim 17 recites an abstract idea.
Under Step 2A, Prong 2, if it is determined that the claims recite a judicial exception, it is then necessary to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of that exception. See MPEP §2106.04(d). In this case, claim 17 includes additional elements such as one or more non-transitory computer-readable storage media comprising instructions for execution which, when executed by a processor, result in operations; a display device associated with a media system; wherein the stimulus is streamed from a content database of the media system; at least one sensor of the media system; a processor of the media system; and metadata associated with the first item of content and metadata associated the plurality of content items stored in the content database.
Although reciting additional elements, the additional elements do not integrate the abstract idea into a practical application because they merely amount to no more than an instruction to apply the abstract idea using a generic computer or merely use a computer as a tool to perform the abstract idea. These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration. Similar to the claims in Alice, claim 17 merely recites a commonplace business method (i.e., making a recommendation) being applied on a general purpose computer. See MPEP §§2106.04(d) and 2106.05(f). Thus, the claimed additional elements are merely generic elements and the implementation of the elements merely amounts to no more than an instruction to apply the abstract idea using a generic computer. Since the additional elements merely include instructions to implement the abstract idea on a generic computer or merely use a generic computer as a tool to perform an abstract idea, the abstract idea has not been integrated into a practical application. As such, claim 17 is directed to an abstract idea.
Under Step 2B, if it is determined that the claims recite a judicial exception that is not integrated into a practical application of that exception, it is then necessary to evaluate the additional elements individually and in combination to determine whether they provide an inventive concept (i.e., whether the additional elements amount to significantly more than the exception itself). See MPEP §2106.05.
Here, as noted above, the additional elements recited in claim 17 are recited and described in a generic manner and merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea.
Even when considered as an ordered combination, the additional elements of claim 17 do not add anything that is not already present when they are considered individually. In Alice, the court considered the additional elements “as an ordered combination,” and determined that “the computer components ... ‘ad[d] nothing ... that is not already present when the steps are considered separately’ and simply recite intermediated settlement as performed by a generic computer.” Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 224, 110 USPQ2d 1976, 1983-84 (2014) (citing Mayo, 566 U.S. at 79, 101 USPQ2d at 1972). See also MPEP §2106.05(f). Similarly, when viewed as a whole, claim 17 simply conveys the abstract idea itself facilitated by generic computing components. Therefore, under Step 2B, there are no meaningful limitations in claim 17 that transform the judicial exception into a patent-eligible application such that the claims amount to significantly more than the judicial exception itself.
As such, claim 17 is ineligible.
Dependent claims 18-20 do not aid in the eligibility of independent claim 17. For example, claims 18-20 merely further define the abstract limitations of claim 17.
Additionally, it is noted that claims 18-20 do not include further additional elements. Therefore, the claims do not integrate the abstract idea into a practical application because they merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea. The claims also do not amount to significantly more than the abstract idea because they merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea.
Thus, dependent claims 18-20 are also ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 5-11, and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over Boissière (US 11107282 B1, herein referred to as Boissière), in view of Singh et al. (US 20250005868 A1, herein referred to as Singh).
Claim 1:
Boissière discloses:
A method comprising {Boissière: [Col. 14, ln. 51-55] method 600 for obtaining biometric characteristics of a user and using the biometric characteristics to determine a criterion used to suggest virtual reality content to a user in a virtual reality content store.}:
presenting via a media system comprising a media device associated with a display device a stimulus to a user, wherein the stimulus comprises a portion of a first item of multimedia content from a plurality of multimedia content items {Boissière: fig 1B, system 100 has device 100b connected to device 100c that has display 120; [Col. 15, ln. 42-49] user device receives the request to access the virtual reality content store to access downloadable virtual reality content while the user is viewing a virtual reality experience. For example, while the user is immersed in a virtual reality game, watching video content, or using an application (i.e., stimulus comprising a portion of multimedia content), the exemplary user device receives user input to access the virtual reality content store from within the application; [Col. 15, ln. 19-26] exemplary user device measures (upon receiving user permission) biometric characteristics of the user while the user is viewing virtual reality content. For example, the user may be playing a game that requires physical activity. The exemplary user device measures the user's heart rate during the game before a request to access new virtual reality content for the game is received by the device};
detecting using a sensor of the media system at least one non-verbal reaction of the user to the stimulus {Boissière: fig 1B, system 100 is connected to sensors 108, 110, 116, 122, and 124; [Col. 12, ln. 24-29] user device 204 receives user input 304 to access the virtual reality store while displaying virtual reality content. In some embodiments, exemplary user device 204 is configured to detect user input through input sensors, motion sensors, or finger tracking sensors. The exemplary user input includes a gesture input, or a gaze or eye movement input.};
processing by a processor of the media system the detected at least one non-verbal reaction to determine a response of the user to the stimulus, wherein the response includes determining whether the user has reacted to the stimulus {Boissière: fig 1B, system 100 has processor(s) 102; [Col. 12, ln. 42-48] exemplary user device determines a criterion based on the comfort level associated with the user's profile. In some embodiments, the criterion is based on the detected user biometrics, which are based on a change in one or more physiological states of the user detected while the user is viewing virtual reality content; [Col. 11, ln. 44-51] comfort level of the user is determined based on the detected biometric characteristics. The comfort level is a measure of an activity level or the activeness of the user. The comfort level is based on the physical movement of the user and is a measure of how tired the user is after viewing virtual reality content.};
identifying ones of the multimedia content items based on the determined response of the user to the stimulus, wherein the selecting is performed based on data associated with the first item of media content and data associated the plurality of multimedia content items {Boissière: [Col. 12, ln. 42-48] criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity for the user; [Col. 12, ln. 40-52] In response to receiving a request to display the virtual reality content store, the exemplary user device determines a criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity (i.e., data associated with the plurality of multimedia content items) for the user. In some embodiments, the criterion is based on the detected user biometrics, which are based on a change in one or more physiological states of the user detected (i.e., data associated with the first item of media content) while the user is viewing virtual reality content.};
displaying on the display device a list of recommendations based on the determined response of the user to the stimulus, wherein the list of recommendations comprises the identified ones of the plurality of multimedia content items {Boissière: fig 1B, display 120; fig 5A; [Col. 15, ln. 42-49] user device receives the request to access the virtual reality content store to access downloadable virtual reality content while the user is viewing a virtual reality experience; [Col. 12, ln. 42-48] criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity for the user}; and
prompting the user to select a second item of multimedia content comprising one of the recommendations from the list {Boissière: fig 5A; [Col. 13, ln. 28-37] the virtual reality content store includes virtual reality content, such as games, that are accessed or purchased. The user device displays, as part of virtual reality content store user interface 502, an affordance of a user profile or user account 532A and one or more affordances of downloadable virtual reality content; [Col. 12, ln. 40-52] In response to receiving a request to display the virtual reality content store, the exemplary user device determines a criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity for the user. In some embodiments, the criterion is based on the detected user biometrics, which are based on a change in one or more physiological states of the user detected while the user is viewing virtual reality content. The exemplary user device provides the determined criterion to the virtual reality content store to obtain relevant virtual reality content having an associated score that satisfies the criterion to display in the virtual reality content store.}.
Although disclosing a virtual reality system that allows users to access a virtual reality content store that recommends content based on the user’s physiological responses, Boissière does not disclose:
a first item of multimedia content streamed from a content database storing a plurality of multimedia content items;
a response of the user to the stimulus, wherein the response includes determining whether the user has reacted positively or negatively to the stimulus;
selecting multimedia content items stored in the content database, wherein the selecting is performed based on metadata associated with the first item of media content and metadata associated the plurality of multimedia content items stored in the content database; and
wherein the list of recommendations comprises the identified ones of the plurality of multimedia content items stored in the content database.
Boissière does disclose that the system determines how the user reacts to the stimulus by measuring biometrics (Boissière: [Col. 12, ln. 42-48]; [Col. 11, ln. 44-51]).
However, Singh teaches:
a first item of multimedia content streamed from a content database storing a plurality of multimedia content items {Singh: [0086] transmit and/or receive (for instance to and/or from content database n-206) content item; [0129] in the case of virtual goods, such as digital media assets, game levels, or virtual reality experiences, the purchased items can be directly downloaded or streamed to the user’s XR device 110};
a response of the user to the stimulus, wherein the response includes determining whether the user has reacted positively or negatively to the stimulus {Singh: [0122] user expresses interest in the recommended product through a gesture 90, in this case, a thumbs-up gesture. The gesture recognition system of the XR devices may be trained to recognize a variety of hand gestures or other bodily movements, allowing for intuitive and natural user interaction within the XR environment; [0124] system may be configured to identify a wide spectrum of user gestures, both multi-gesture and multi-modal inputs. For examples, a ‘swipe left’ or ‘swipe right’ gesture could indicate rejection or approval of a product as an example of a multi-gesture input.};
selecting multimedia content items stored in the content database, wherein the selecting is performed based on metadata associated with the first item of media content and metadata associated the plurality of multimedia content items stored in the content database {Singh: [0112] control circuitry identifies a second object 20, that possesses one or more properties 12, exhibiting a stronger correlation with the current attribute of interest 75, of the user. This involves comparing the properties of various objects within the XR environment 1, and selecting the one that best aligns with the user's identified interest; [0086] transmit and/or receive (for instance to and/or from content database n-206) content item; [0077] Storage devices may store various types of content, metadata, and/or other types of data for use in personalized product recommendation generation in the XR environment}; and
wherein the list of recommendations comprises the identified ones of the plurality of multimedia content items stored in the content database {Singh: [0104] system might provide a list of multiple recommended items for purchase; [0129] in the case of virtual goods, such as digital media assets, game levels, or virtual reality experiences, the purchased items can be directly downloaded or streamed to the user’s XR device 110; [0086] transmit and/or receive (for instance to and/or from content database n-206) content item}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included a content database as taught by Singh in the comfort measurement method of Boissière in order to store product information in an organized manner that facilitates efficient retrieval and analysis (Singh: [0056]).
Claim 2:
Boissière and Singh teach the method of claim 1. Boissière further discloses:
wherein the detected non-verbal reaction comprises at least one of a facial expression, kinesics, paralinguistics, body language, posture, gaze, or a physiological response of the user {Boissière: [Col. 12, ln. 24-29] exemplary user device 204 receives user input 304 to access the virtual reality store while displaying virtual reality content. In some embodiments, exemplary user device 204 is configured to detect user input through input sensors, motion sensors, or finger tracking sensors. The exemplary user input includes a gesture input, or a gaze or eye movement input.}.
Claim 3:
Boissière and Singh teach the method of claim 1. Boissière further discloses:
wherein the detected non-verbal reaction comprises subvocalization signals {Boissière: [Col. 12, ln. 1-2] user device 204 monitors the user's facial expressions}.
Claim 8:
Boissière and Singh teach the method of claim 1. Boissière further discloses:
wherein the detecting is performed using a sensor incorporated into a device worn by the user {Boissière: [Col. 11, ln. 41-59] exemplary user device 204 is configured to detect biometric characteristics based on user 202's physiological state while user 202 is viewing virtual reality content 210 through exemplary user device 204. These biometric characteristics are measured by exemplary user device 204 or paired user device 206 (e.g., a watch, motion sensor, sweat monitor, heart rate monitor, and other input devices paired to exemplary user device 204) and used to determine the comfort level of the user.}.
Claim 9:
Boissière and Singh teach the method of claim 1. Boissière further discloses:
presenting via the display device the second item of multimedia content {Boissière: fig 5A, “Suggested for you”; [Col. 14, ln. 24-28] user device receives a request to purchase additional virtual reality content while the device is displaying a virtual reality scene in a game; [Col. 18, ln. 20] content can be selected and delivered to users};
detecting using the sensor at least one non-verbal reaction of the user to the second item of multimedia content {Boissière: fig 1B, device 100c has sensors 108, 110, 116, 122, 124; [Col. 18, ln. 20] content can be selected and delivered to users; [Col. 14, ln. 24-28] user device receives a request to purchase additional virtual reality content while the device is displaying a virtual reality scene in a game. In such an example, the exemplary user device detects the user's heartrate within a predetermined period of time after receiving the request to purchase the additional virtual reality object};
processing the detected at least one non-verbal reaction of the user to the second item of multimedia content to determine a response of the user to the second item of multimedia content {Boissière: [Col. 14, ln. 18-24] user device determines the criterion while the displaying virtual reality content and stores the criterion in the user's profile. In some embodiments, the user device determines the criterion based on user biometric characteristics measured after receiving the request to access the virtual reality content store.}; and
updating the list of recommendations based on the determined response of the user to the second item of multimedia content {Boissière: [Col. 12, ln. 60-61] virtual reality content score can be based on a plurality of metrics; [Col. 14, ln. 24-28] user device receives a request to purchase additional virtual reality content while the device is displaying a virtual reality scene in a game. In such an example, the exemplary user device detects the user's heartrate within a predetermined period of time after receiving the request to purchase the additional virtual reality object; [Col. 13, ln. 41-46] exemplary user device is configured to display downloadable virtual reality content in the virtual content store user interface 502 according to a criterion based on biometric characteristics or a comfort level associated with user profile}.
Claim 10:
Boissière and Singh teach the method of claim 1. Boissière further discloses:
presenting the second-item of multimedia content on the display {Boissière: fig 5A, “Suggested for you”; [Col. 13, ln. 28-37] user device displays, as part of virtual reality content store user interface 502, an affordance of a user profile or user account 532A and one or more affordances of downloadable virtual reality content};
monitoring non-verbal reactions of the user to the second-item of multimedia content {Boissière: [Col. 14, ln. 24-28] user device receives a request to purchase additional virtual reality content while the device is displaying a virtual reality scene in a game. In such an example, the exemplary user device detects the user's heartrate within a predetermined period of time after receiving the request to purchase the additional virtual reality object.};
processing the non-verbal reactions of the user to the second-item of multimedia content to determine a response of the user to the second-item of multimedia content {Boissière: [Col. 14, ln. 18-24] user device determines the criterion while the displaying virtual reality content and stores the criterion in the user's profile. In some embodiments, the user device determines the criterion based on user biometric characteristics measured after receiving the request to access the virtual reality content store.}.
Although disclosing that the user’s reactions can be monitored after displaying the recommended items, Boissière does not disclose:
modifying presentation of the second item of multimedia content based on the determined response of the user to the second item of multimedia content.
However, Singh teaches:
modifying presentation of the second item of multimedia content based on the determined response of the user to the second item of multimedia content {Singh: [0092] The system 200 may update the personalized product recommendations in near- or real-time based on the user's gaze data, feedback, and other contextual factors.}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included modifying the recommended content in real-time as taught by Singh in the comfort measurement method of Boissière in order to ensure that the recommendations remain relevant and engaging for the user (Singh: [0092]).
Claim 11:
Boissière discloses:
A multimedia system comprising {Boissière: figs 1A-1B; [Col. 7, ln. 1-7] system 100 includes device 100a. Device 100a includes various components, such as processor(s) 102, RF circuitry(ies) 104, memory(ies) 106, image sensor(s) 108, orientation sensor(s) 110, microphone(s) 112, location sensor(s) 116, speaker(s) 118, display(s) 120, touch-sensitive surface(s) 122, and biometric sensor(s) 124}:
a processor {Boissière: figs 1A-1B, processor(s) 102};
a memory device {Boissière: figs 1A-1B, memory(ies) 106};
at least one display for displaying selected ones of the items of content to users {Boissière: figs 1A-1I, display(s) 120; figs 5A-5B virtual reality content store interface 502; [Col. 13, ln. 51-53] user device displays downloadable virtual reality content in a virtual reality store user interface 502; [Col. 4, ln. 10-13] because a first user has different biometric characteristics than a second user, a first user sees different downloadable virtual reality content than a second user sees in the virtual reality content store.}; and
at least one sensing device for detecting non-verbal reactions of each user to a stimulus presented on the at least one display, wherein the stimulus comprises a portion of a first one of the items of content {Boissière: fig 1B, device 100c has sensors 108, 110, 116, 122, 124; [Col. 12, ln. 24-29] exemplary user device 204 receives user input 304 to access the virtual reality store while displaying virtual reality content. In some embodiments, exemplary user device 204 is configured to detect user input through input sensors, motion sensors, or finger tracking sensors. The exemplary user input includes a gesture input, or a gaze or eye movement input; [Col. 4, ln. 10-13] because a first user has different biometric characteristics than a second user, a first user sees different downloadable virtual reality content than a second user sees in the virtual reality content store};
wherein the multimedia system is configured to {Boissière: fig 1B, system 100}:
process the detected non-verbal reactions of the users on a per-user basis to determine responses of users to the stimulus {Boissière: [Col. 12, ln. 42-48] exemplary user device determines a criterion based on the comfort level associated with the user's profile. In some embodiments, the criterion is based on the detected user biometrics, which are based on a change in one or more physiological states of the user detected while the user is viewing virtual reality content; [Col. 4, ln. 10-13] because a first user has different biometric characteristics than a second user, a first user sees different downloadable virtual reality content than a second user sees in the virtual reality content store};
identify a plurality of the content items based on the determined responses of the users to the stimulus, wherein the selecting is performed based on data associated with the first one of the items of content and data associated with the identified plurality of content items {Boissière: [Col. 12, ln. 42-48] criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity for the user; [Col. 12, ln. 40-52] In response to receiving a request to display the virtual reality content store, the exemplary user device determines a criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity (i.e., data associated with the plurality of multimedia content items) for the user. In some embodiments, the criterion is based on the detected user biometrics, which are based on a change in one or more physiological states of the user detected (i.e., data associated with the first item of media content) while the user is viewing virtual reality content.};
wherein the users are provided with a list of recommendations based on the determined responses of the users {Boissière: figs 5A-5B, Suggested for you 504; [Col. 15, ln. 42-49] user device receives the request to access the virtual reality content store to access downloadable virtual reality content while the user is viewing a virtual reality experience; [Col. 12, ln. 42-48] criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity for the user; [Col. 4, ln. 10-13] because a first user has different biometric characteristics than a second user, a first user sees different downloadable virtual reality content than a second user sees in the virtual reality content store}.
Although disclosing a virtual reality system that allows users to access a virtual reality content store that recommends content based on the user’s physiological responses, Boissière does not disclose:
a database comprising items of content, wherein each of the items of content has metadata associated therewith;
displaying selected ones of the items of content to a group of users;
detecting reactions of each user of the group of users to a stimulus presented to the group of users;
process the detected reactions of the group of users on a per-user basis to determine responses of the group of users to the stimulus;
a plurality of the content items stored in the database based on the determined responses of the group of users to the stimulus, wherein the selecting is performed based on metadata associated with the first one of the items of content and metadata associated with the identified plurality of content items stored in the database;
wherein the users are provided with a list of recommendations based on the determined responses of the group of users.
Boissière does disclose that multiple users can use the platform to access virtual reality content (Boissière: [Col. 4, ln. 10-13]), and the feedback can be crowd sourced (Boissière: [Col. 12, ln. 64-67]).
However, Singh teaches:
a database comprising items of content, wherein each of the items of content has metadata associated therewith {Singh: [0112] control circuitry identifies a second object 20, that possesses one or more properties 12, exhibiting a stronger correlation with the current attribute of interest 75, of the user. This involves comparing the properties of various objects within the XR environment 1, and selecting the one that best aligns with the user's identified interest; [0086] transmit and/or receive (for instance to and/or from content database n-206) content item; [0077] Storage devices may store various types of content, metadata, and/or other types of data for use in personalized product recommendation generation in the XR environment};
displaying selected ones of the items of content to a group of users {Singh: [0063] system may be used in a virtual environment where the user is browsing for media content, such as movies, music, or video games, through a display or other user interface; [0065] system may also support collaborative shopping experiences, wherein multiple users can share information within the same XR environment};
detecting reactions of each user of the group of users to a stimulus presented to the group of users {Singh: [0063] system may be used in a virtual environment where the user is browsing for media content; [0065] system may also support collaborative shopping experiences, aggregating and analyzing the gaze patterns and preferences of multiple users};
process the detected reactions of the group of users on a per-user basis to determine responses of the group of users to the stimulus {Singh: [0063] system may analyze the user's gaze patterns as they interact with the virtual environment, identifying the user's preferences and interests; [0065] multiple users can share their gaze data, within the same XR environment. By aggregating and analyzing the gaze patterns and preferences of multiple users, the system is able to generate more comprehensive and relevant product recommendations that cater to the needs and preferences of the entire group.};
a plurality of the content items stored in the database based on the determined responses of the group of users to the stimulus, wherein the selecting is performed based on metadata associated with the first one of the items of content and metadata associated with the identified plurality of content items stored in the database {Singh: [0065] system may also support collaborative shopping experiences, wherein multiple users can share their gaze data, product recommendations, and other relevant information within the same XR environment. This could be particularly useful for situations where users are shopping together, such as when a group of friends or family members is planning a meal or event. By aggregating and analyzing the gaze patterns and preferences of multiple users, the system is able to generate more comprehensive and relevant product recommendations; [0122] user expresses interest in the recommended product through a gesture 90, in this case, a thumbs-up gesture. The gesture recognition system of the XR devices may be trained to recognize a variety of hand gestures or other bodily movements, allowing for intuitive and natural user interaction within the XR environment; [0124] system may be configured to identify a wide spectrum of user gestures, both multi-gesture and multi-modal inputs. For examples, a ‘swipe left’ or ‘swipe right’ gesture could indicate rejection or approval of a product as an example of a multi-gesture input; [0086] transmit and/or receive (for instance to and/or from content database n-206) content item; [0077] Storage devices may store various types of content, metadata, and/or other types of data for use in personalized product recommendation generation in the XR environment};
wherein the users are provided with a list of recommendations based on the determined responses of the group of users {Singh: [0065] By aggregating and analyzing the gaze patterns and preferences of multiple users, the system is able to generate more comprehensive and relevant product recommendations that cater to the needs and preferences of the entire group; [0104] system might provide a list of multiple recommended items for purchase}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included collaborative use of the XR system as taught by Singh in the comfort measurement system of Boissière in order to generate more comprehensive and relevant product recommendations that cater to the needs and preferences of the entire group (Singh: [0065]).
Claim 13:
Boissière and Singh teach the system of claim 11. Boissière does not disclose:
wherein the display comprises a television display.
However, Singh teaches:
wherein the display comprises a television display {Singh: [0084] User input interface 226 may be integrated with or combined with display 224, which may be a television}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included a television display as taught by Singh in the comfort measurement system of Boissière in order to suitably display images in the XR environment (Singh: [0084]).
Claim 16:
Boissière and Singh teach the system of claim 11, including that the system can be used by a group of users. Boissière further discloses:
wherein the at least one sensing device comprises a plurality of sensing devices and wherein each one of the plurality of sensing devices is incorporated into a device worn by a user {Boissière: [Col. 11, ln. 41-59] exemplary user device 204 is configured to detect biometric characteristics based on user 202's physiological state while user 202 is viewing virtual reality content 210 through exemplary user device 204. These biometric characteristics are measured by exemplary user device 204 or paired user device 206 (e.g., a watch, motion sensor, sweat monitor, heart rate monitor, and other input devices paired to exemplary user device 204) and used to determine the comfort level of the user.}.
Claim 17:
Boissière discloses:
One or more non-transitory computer-readable storage media comprising instruction for execution which, when executed by a processor {Boissière: [Col. 2, ln. 33-36] non-transitory computer-readable storage medium comprising one or more programs configured to be executed by one or more processors of an electronic device}, result in operations comprising:
presenting on a display device associated with a media system a stimulus to a user, wherein the stimulus comprises a portion of a first item of content {Boissière: fig 1B, system 100 has device 100c with display 120; [Col. 15, ln. 42-49] user device receives the request to access the virtual reality content store to access downloadable virtual reality content while the user is viewing a virtual reality experience. For example, while the user is immersed in a virtual reality game, watching video content, or using an application, the exemplary user device receives user input to access the virtual reality content store from within the application.};
detecting by at least one sensor of the media system at least one non-verbal reaction of the user to the stimulus {Boissière: fig 1B, system 100 with device 100c has sensors 108, 110, 116, 122, 124; [Col. 12, ln. 24-29] exemplary user device 204 receives user input 304 to access the virtual reality store while displaying virtual reality content. In some embodiments, exemplary user device 204 is configured to detect user input through input sensors, motion sensors, or finger tracking sensors. The exemplary user input includes a gesture input, or a gaze or eye movement input.};
processing by a processor of the media system the detected at least one non-verbal reaction to determine a response of the user to the stimulus {Boissière: fig 1B, system 100 has processors 102; [Col. 12, ln. 42-48] exemplary user device determines a criterion based on the comfort level associated with the user's profile. In some embodiments, the criterion is based on the detected user biometrics, which are based on a change in one or more physiological states of the user detected while the user is viewing virtual reality content.};
identifying ones of the content based on the determined response of the user to the stimulus, wherein the selecting is performed based on data associated with the first item of content and data associated the plurality of content items {Boissière: [Col. 12, ln. 42-48] criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity for the user; [Col. 12, ln. 40-52] In response to receiving a request to display the virtual reality content store, the exemplary user device determines a criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity (i.e., data associated with the plurality of multimedia content items) for the user. In some embodiments, the criterion is based on the detected user biometrics, which are based on a change in one or more physiological states of the user detected (i.e., data associated with the first item of media content) while the user is viewing virtual reality content.};
displaying on the display device a list of recommendations based on the determined response of the user to the stimulus, wherein the list of recommendations comprises the identified ones of the plurality of content items {Boissière: fig 1B, device 100c has display 120; fig 5A; [Col. 15, ln. 42-49] user device receives the request to access the virtual reality content store to access downloadable virtual reality content while the user is viewing a virtual reality experience; [Col. 12, ln. 42-48] criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity for the user}; and
prompting the user to select a second item of content comprising one of the recommendations form the list {Boissière: fig 5A; [Col. 13, ln. 28-37] the virtual reality content store includes virtual reality content, such as games, that are accessed or purchased. The user device displays, as part of virtual reality content store user interface 502, an affordance of a user profile or user account 532A and one or more affordances of downloadable virtual reality content; [Col. 12, ln. 40-52] In response to receiving a request to display the virtual reality content store, the exemplary user device determines a criterion based on the comfort level associated with the user's profile is used to determine one or more recommended virtual reality content that is the appropriate level of activity for the user. In some embodiments, the criterion is based on the detected user biometrics, which are based on a change in one or more physiological states of the user detected while the user is viewing virtual reality content. The exemplary user device provides the determined criterion to the virtual reality content store to obtain relevant virtual reality content having an associated score that satisfies the criterion to display in the virtual reality content store.};
wherein the detected non-verbal reaction comprises at least one of a facial expression, kinesics, paralinguistics, body language, posture, gaze, or a physiological response of the user {Boissière: [Col. 12, ln. 24-29] exemplary user device 204 receives user input 304 to access the virtual reality store while displaying virtual reality content. In some embodiments, exemplary user device 204 is configured to detect user input through input sensors, motion sensors, or finger tracking sensors. The exemplary user input includes a gesture input, or a gaze or eye movement input.}.
Although disclosing a virtual reality system that allows users to access a virtual reality content store that recommends content based on the user’s physiological responses, Boissière does not disclose:
wherein the stimulus is streamed from a content database of the media system, the content database for storing a plurality of content items;
wherein the selecting is performed based on metadata associated with the first item of content and metadata associated the plurality of content items stored in the content database;
displaying ones of the plurality of content items stored in the content database.
However, Singh teaches:
wherein the stimulus is streamed from a content database of the media system, the content database for storing a plurality of content items {Singh: [0086] transmit and/or receive (for instance to and/or from content database n-206) content item; [0129] in the case of virtual goods, such as digital media assets, game levels, or virtual reality experiences, the purchased items can be directly downloaded or streamed to the user’s XR device 110};
wherein the selecting is performed based on metadata associated with the first item of content and metadata associated the plurality of content items stored in the content database {Singh: [0112] control circuitry identifies a second object 20, that possesses one or more properties 12, exhibiting a stronger correlation with the current attribute of interest 75, of the user. This involves comparing the properties of various objects within the XR environment 1, and selecting the one that best aligns with the user's identified interest; [0086] transmit and/or receive (for instance to and/or from content database n-206) content item; [0077] Storage devices may store various types of content, metadata, and or other types of data for use in personalized product recommendation generation in the XR environment};
displaying ones of the plurality of content items stored in the content database {Singh: [0129] in the case of virtual goods, such as digital media assets, game levels, or virtual reality experiences, the purchased items can be directly downloaded or streamed to the user’s XR device 110; [0086] transmit and/or receive (for instance to and/or from content database n-206) content item}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included a content database as taught by Singh in the comfort measurement computer-readable medium of Boissière in order to store content in an organized manner that facilitates efficient retrieval and analysis of product information (Singh: [0056]).
Claim 18:
Boissière and Singh teach the non-transitory computer-readable medium of claim 17. Boissière further discloses:
detecting at least one non-verbal reaction of the user to the at least one second item of content {Boissière: [Col. 14, ln. 24-28] user device receives a request to purchase additional virtual reality content while the device is displaying a virtual reality scene in a game. In such an example, the exemplary user device detects the user's heartrate within a predetermined period of time after receiving the request to purchase the additional virtual reality object.};
processing the detected at least one non-verbal reaction of the user to the at least one second item of content to determine a response of the user to the at least one second item of content {Boissière: [Col. 14, ln. 18-24] user device determines the criterion while the displaying virtual reality content and stores the criterion in the user's profile. In some embodiments, the user device determines the criterion based on user biometric characteristics measured after receiving the request to access the virtual reality content store.}; and
updating the list of recommendations based on the determined response of the user to the at least one second item of content {Boissière: [Col. 12, ln. 60-61] virtual reality content score can be based on a plurality of metrics; [Col. 14, ln. 24-28] user device receives a request to purchase additional virtual reality content while the device is displaying a virtual reality scene in a game. In such an example, the exemplary user device detects the user's heartrate within a predetermined period of time after receiving the request to purchase the additional virtual reality object; [Col. 13, ln. 41-46] exemplary user device is configured to display downloadable virtual reality content in the virtual content store user interface 502 according to a criterion based on biometric characteristics or a comfort level associated with user profile}.
Claim 19:
Boissière and Singh teach the non-transitory computer-readable medium of claim 17. Boissière further discloses:
presenting the selected item of content to the user {Boissière: fig 5A, “Suggested for you”; [Col. 13, ln. 28-37] user device displays, as part of virtual reality content store user interface 502, an affordance of a user profile or user account 532A and one or more affordances of downloadable virtual reality content};
monitoring non-verbal reactions of the user to the selected item of content {Boissière: [Col. 14, ln. 24-28] user device receives a request to purchase additional virtual reality content while the device is displaying a virtual reality scene in a game. In such an example, the exemplary user device detects the user's heartrate within a predetermined period of time after receiving the request to purchase the additional virtual reality object.};
processing the non-verbal reactions of the user to the selected item of content to determine a response of the user to the selected item of content {Boissière: [Col. 14, ln. 18-24] user device determines the criterion while the displaying virtual reality content and stores the criterion in the user's profile. In some embodiments, the user device determines the criterion based on user biometric characteristics measured after receiving the request to access the virtual reality content store.}.
Although disclosing that the user’s reactions can be monitored after displaying the recommended items, Boissière does not disclose:
modifying presentation of the selected item of content based on the determined response of the user to the selected item of content.
However, Singh teaches:
modifying presentation of the selected item of content based on the determined response of the user to the selected item of content {Singh: [0092] The system 200 may update the personalized product recommendations in near- or real-time based on the user's gaze data, feedback, and other contextual factors.}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included updating the content recommendations as taught by Singh in the comfort measurement computer-readable medium of Boissière in order to ensure that the recommendations remain relevant and engaging for the user (Singh: [0092]).
Claim 20:
Boissière and Singh teach the non-transitory computer-readable medium of claim 19. Boissière does not disclose:
wherein the user comprises a group of users and wherein the monitoring the non-verbal reactions of the user to the selected item of content comprises monitoring the non-verbal reactions of each user of the group of users to the selected item of content;
wherein the processing the non-verbal reactions of the user to the selected item of content to determine a response of the user to the selected item of content comprises processing the non-verbal reactions of each user of the group of users to the selected item of content to determine a response of the user to the selected item of content; and
wherein the modifying presentation of the selected item of content based on the determined response of the user to the selected item of content further comprises modifying presentation of the selected item of content to each user of the group of users based on the determined response of the user to the selected item of content.
Boissière does disclose that the non-verbal reactions of users are monitored to determine a response of the user (Boissière: [Col. 12, ln. 42-48]).
However, Singh teaches:
wherein the user comprises a group of users and wherein the monitoring the non-verbal reactions of the user to the selected item of content comprises monitoring the non-verbal reactions of each user of the group of users to the selected item of content {Singh: [0063] system may be used in a virtual environment where the user is browsing for media content; [0065] system may also support collaborative shopping experiences, aggregating and analyzing the gaze patterns and preferences of multiple users};
wherein the processing the non-verbal reactions of the user to the selected item of content to determine a response of the user to the selected item of content comprises processing the non-verbal reactions of each user of the group of users to the selected item of content to determine a response of the user to the selected item of content {Singh: [0063] system may analyze the user's gaze patterns as they interact with the virtual environment, identifying the user's preferences and interests; [0065] multiple users can share their gaze data, within the same XR environment. By aggregating and analyzing the gaze patterns and preferences of multiple users, the system is able to generate more comprehensive and relevant product recommendations that cater to the needs and preferences of the entire group.}; and
wherein the modifying presentation of the selected item of content based on the determined response of the user to the selected item of content further comprises modifying presentation of the selected item of content to each user of the group of users based on the determined response of the user to the selected item of content {Singh: [0065] By aggregating and analyzing the gaze patterns and preferences of multiple users, the system is able to generate more comprehensive and relevant product recommendations that cater to the needs and preferences of the entire group; [0159] virtual overlay 140, comprises a modified advertisement 146. This modified advertisement is tailored to the user's current attributes of interest, as determined by the system, enhancing its relevance and potential impact.}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included collaborative use of the XR system as taught by Singh in the comfort measurement non-transitory computer-readable medium of Boissière in order to generate more comprehensive and relevant product recommendations that cater to the needs and preferences of the entire group (Singh: [0065]).
Claim 23:
Boissière and Singh teach the method of claim 10. Boissière further discloses:
updating a profile of the user in connection with the media system based on the determined response of the user to the stimulus {Boissière: [Col. 14, ln. 18-29] user device determines criterion while displaying virtual reality content and stores the criterion in the user's profile. In some embodiments, the user device determines the criterion based on user biometric characteristics measured after receiving the request to access the virtual reality content store.}.
Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Boissière (US 11107282 B1, herein referred to as Boissière), in view of Singh et al. (US 20250005868 A1, herein referred to as Singh), in further view of Ross et al. (US 20220058942 A1, herein referred to as Ross).
Claim 4:
Boissière and Singh teach the method of claim 1. Boissière does not disclose:
wherein the subvocalization signals are detected using an electrode positioned on a jaw of the user to detect neuromuscular signals.
Boissière does disclose that the user’s facial expressions can be monitored (Boissière: [Col. 12, ln. 1-2]).
However, Ross teaches:
wherein the subvocalization signals are detected using an electrode positioned on a jaw of the user to detect neuromuscular signals {Ross: [0053] activation accessories 19 positioned over a wearer's masseter muscle. In FIG. 3, the activation accessory 19 may be supported in or on an extension 23 of headset 44 to position the activation accessory 19 over the masseter muscle, that is positioned to contact the right or left side of the wearer's face within an area below the ear canal to the bottom of the mandible; [0056] The activation accessory may include more than one Hall effect sensor 16, with the multiple sensors arranged with respect to one another so as to permit individual and/or group activation thereof by associated volitional jaw clench actions of the wearer; [0065] a device which may be used in place of the Hall effect sensor 16 include an EMG sensor or a piezo switch. The electrical pulse is produced when the piezoelectric element is placed under stress, for example as a result of compressive forces resulting from movement of a movable actuator 8 responsive to a wearer clenching his/her jaw so that pressure is exerted against the piezoelectric element}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included a piezoelectric sensor on a user’s jaw as taught by Ross in the comfort measurement method of Boissière and Singh in order to allow for hands-free operation (Ross: [0039]).
Claim 12:
Boissière and Singh teach the system of claim 11. Boissière further discloses:
wherein the detected non-verbal reaction comprises subvocalization signals {Boissière: [Col. 12, ln. 1-2] user device 204 monitors the user's facial expressions}.
Although disclosing that the device can monitor the user’s facial movements, Boissière does not disclose:
the sensing device comprises at least one electrode positioned on a jaw of the user to detect neuromuscular signals.
However, Ross teaches:
the sensing device comprises at least one electrode positioned on a jaw of the user to detect neuromuscular signals {Ross: fig 1, activation accessory 19; [0053] activation accessories 19 positioned over a wearer's masseter muscle. In FIG. 3, the activation accessory 19 may be supported in or on an extension 23 of headset 44 to position the activation accessory 19 over the masseter muscle, that is positioned to contact the right or left side of the wearer's face within an area below the ear canal to the bottom of the mandible; [0056] The activation accessory may include more than one Hall effect sensor, with the multiple sensors arranged with respect to one another so as to permit individual and/or group activation thereof by associated volitional jaw clench actions of the wearer; [0065] a device which may be used in place of the Hall effect sensor include an EMG sensor or a piezo switch. The electrical pulse is produced when the piezoelectric element is placed under stress, for example as a result of compressive forces resulting from movement of a movable actuator 8 responsive to a wearer clenching his/her jaw so that pressure is exerted against the piezoelectric element}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included a piezoelectric sensor on a user’s jaw as taught by Ross in the comfort measurement system of Boissière and Singh in order to allow for hands-free operation (Ross: [0039]).
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Boissière (US 11107282 B1, herein referred to as Boissière), in view of Singh et al. (US 20250005868 A1, herein referred to as Singh), in further view of Plankey et al. (US 11922974 B1, herein referred to as Plankey).
Claim 21:
Boissière and Singh teach the method of claim 10. Neither Boissière nor Singh discloses:
wherein modifying the presentation comprises muting audio of the second item of multimedia content.
Boissière discloses multiple items of multimedia content (Boissière: [Col. 15, ln. 45-49]), and Singh does disclose that the audio can be modified (Singh: [0162]).
However, Plankey teaches:
wherein modifying the presentation comprises muting audio of the second item of multimedia content {Plankey: [Col. 21, ln. 33-37] custom multimedia portions may also include custom audio options as indicated in FIG. 9A, including but not limited to a silent audio.}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included silencing audio as taught by Plankey in the comfort measurement system of Boissière and Singh in order to provide improved efficiencies in exporting multimedia sales promotions to different types of websites so that postings on some websites can be automatically modified (Plankey: [Col. 6, ln. 42-45]).
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Boissière (US 11107282 B1, herein referred to as Boissière), in view of Singh et al. (US 20250005868 A1, herein referred to as Singh), in further view of Malak et al. (US 20150248722 A1, herein referred to as Malak).
Claim 22:
Boissière and Singh teach the method of claim 10. Neither Boissière nor Singh discloses:
wherein modifying the presentation comprises blurring video of the second item of multimedia content.
Boissière discloses multiple items of multimedia content (Boissière: [Col. 15, ln. 45-49]). Singh discloses displaying overlays that are semi-transparent or opaque (Singh: [0058]) and that the video can be modified (Singh: [0162]).
However, Malak teaches:
wherein modifying the presentation comprises blurring video of the second item of multimedia content {Malak: [0084] The masked video stack containing the two videos is presented to a user as one video via the same process described above with regards to the Split-screen module. The two videos differ visually by, for example, having been previously created so that the main video 200 is blurry, and the second video 202 is in focus.}.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have included blurring one of the two videos as taught by Malak in the comfort measurement system of Boissière and Singh to enable new functionality, interactivity, and/or shoppability to enhance the user experience (Malak: [0032]).
Response to Arguments
With respect to the claim objections, Applicant’s amendments render the claim objections moot. However, in view of the amendments, new grounds of objection have been applied. These new grounds of objection have been necessitated by Applicant’s amendments.
With respect to the rejections under 35 U.S.C. 101, Applicant’s arguments have been considered but are not persuasive. However, in view of the amendments, new grounds of rejection have been applied. These new grounds of rejection have been necessitated by Applicant’s amendments.
With respect to page 9 of the Remarks, Applicant argues “Examiner's assertions that ‘the additional elements merely include instructions to implement the abstract idea on a generic computer or merely use a generic computer as a tool to perform an abstract idea’ and that the additional elements ‘are recited and described in a generic manner [and] merely amount to an instruction to apply the abstract idea using a generic computer or merely use a generic computer as a tool to perform an abstract idea’ ignores the specific and detailed limitations recited in the claims as amended.” However, Examiner respectfully disagrees.
The MPEP at § 2106.04(d), section I provides guidance on how to evaluate whether claims recite a practical application. Specifically, the MPEP states “a claim that integrates a judicial exception into a practical application will apply, rely on, or use the judicial exception in a manner that imposes a meaningful limit on the judicial exception, such that the claim is more than a drafting effort designed to monopolize the judicial exception” where “an additional element reflects an improvement in the functioning of a computer, or an improvement to other technology or technical field” and “an additional element applies or uses the judicial exception in some other meaningful way beyond applying the abstract idea to a computer.” The MPEP further explains that claims that do not recite integration into a practical application include claims merely including instructions to implement an abstract idea on a computer or using a computer as a tool to perform an abstract idea. See also MPEP § 2106.05(f) and 2106.05(h).
In this case, Applicant’s specification provides no explanation of an improvement to the functioning of a computer or other technology beyond applying the abstract idea to a generic computer. Rather, the claims focus “on a process that qualifies as an ‘abstract idea’ for which computers are invoked merely as a tool.” Enfish LLC v. Microsoft Corp., 822 F.3d 1327, 1335-36 (Fed. Cir. 2016). This is reflected in paragraphs [0011]-[0013] of Applicant’s specification, which describe Applicant’s claimed invention as directed toward solving abstract problems such as recommending content to a user. Although the claims include computer technology, such elements are merely peripherally incorporated in order to implement the abstract idea. This is unlike the improvements recognized by the courts in Enfish.
Unlike the precedential case, in which the claims were directed to a specific improvement in the way computers operate, embodied in a self-referential table, neither the specification nor the claims of the instant invention identify such a specific improvement to computer capabilities beyond application to a generic computer. The instant claims are not directed to improving “the existing technological process” but to improving the commercial and mental task of making content recommendations based on user reactions. The claimed process does not provide any improvement to another technology or technical field; it does not, for example, improve the processor and/or computer components that operate the system. Rather, the claimed process utilizes different data while employing the same processor and/or computer components to make content recommendations based on user reactions, i.e., a commercial and mental process. As such, the claims do not recite specific technological improvements beyond application to a generic computer. Therefore, the rejection is maintained in this aspect.
With respect to the rejections under 35 U.S.C. 103, Applicant’s arguments have been considered but are not persuasive. However, in view of the amendments, new grounds of rejection have been applied. These new grounds of rejection have been necessitated by Applicant’s amendments.
With respect to page 10 of the Remarks, Applicant argues “independent claim 1 recites:
processing by a processor of the media system the detected at least one non-verbal reaction to determine a response of the user to the stimulus ...; identifying ones of the multimedia content items stored in the content database based on the determined response of the user to the stimulus, wherein the selecting is performed based on metadata associated with the first item of media content and metadata associated the plurality of multimedia content items stored in the content database; [and] displaying on the display device a list of recommendations, wherein the list of recommendations comprises the identified ones of the plurality of multimedia content items stored in the content database” and “no combination of the cited references teaches or suggests all limitations of amended claim 1.” However, Examiner respectfully disagrees.
Obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).
Boissière discloses a system with processors that detects user biometrics (i.e., non-verbal reactions) while viewing virtual reality content (Boissière: fig 1B; [Col. 12, ln. 42-48]). The user’s biometrics indicate how the user is reacting to the virtual reality content by tracking their activity levels (Boissière: [Col. 11, ln. 44-51]). The user’s activity levels are then used to identify and recommend other virtual reality content that is appropriate for the user based on their physiological state (Boissière: [Col. 12, ln. 40-52]). The recommendations are displayed to the user through a virtual reality content store (Boissière: [Col. 15, ln. 42-49]). Although disclosing a virtual reality system that allows users to access a virtual reality content store that recommends content based on the user’s physiological responses, Boissière does not disclose multimedia content items stored in a content database or metadata, but Singh teaches a content database for digital media assets, games, or virtual reality experiences for purchase that stores metadata for the content (Singh: [0077], [0086], [0129]). Singh is merely relied on to demonstrate that it is predictable for one having ordinary skill in the art to incorporate a content database and metadata in the virtual content comfort system of Boissière, and modifying Boissière to include the elements of Singh would be obvious because it would store product information in an organized manner that facilitates efficient retrieval and analysis (Singh: [0056]). Therefore, the combination of Boissière and Singh does teach the amendments cited by Applicant, and the rejection is maintained.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHERINE A BARLOW whose telephone number is (571)272-5820. The examiner can normally be reached Monday-Tuesday 11am-7pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KATHERINE A BARLOW/Examiner, Art Unit 3689 /VICTORIA E. FRUNZI/Primary Examiner, Art Unit 3689 3/4/2026