Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1 and 12 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1 and 14 of US 12253882. Although the claims at issue are not identical, they are not patentably distinct from each other because they are obvious variations of each other. The claims of the current application appear to be a broader recitation of the patented claims.
Current Application
1. A system comprising: an input device, a wearable computing device, and a display to provide an augmented reality or virtual reality ("AR/VR") environment for a user, the AR/VR environment containing virtual elements; one or more bio-signal sensors to receive bio-signal data from one or more other users, the bio-signal sensors;
the computing device having or in communication with a processor configured to: as part of the AR/VR environment, present content at least on the display where the content includes the virtual elements occurring within the AR/VR environment;
continuously receive user manual inputs from the input device for user interaction with the virtual elements in the AR/VR environment;
continuously receive the bio-signal data of the one or more other users from the one or more bio-signal sensors; process the bio-signal data to extract and select features of the bio-signal data; determine user states of the one or more other users, including brain states, using a prediction model and based on the features of the bio-signal data; and modify the AR/VR environment based on the user states and the user manual inputs.
12. A method of presenting an augmented reality or virtual reality (“AR/VR”) environment for a user, the method comprising: as part of the AR/VR environment, presenting content where the content includes the virtual elements occurring within the AR/VR environment to the user; continuously receiving user manual inputs for user interaction with the virtual elements in the AR/VR environment; continuously receiving bio-signal data of one or more other users; processing the bio-signal data to extract and select features of the bio-signal data; determining user states of the one or more other users, including brain states, using a prediction model and based on the features of the bio-signal data; modifying the AR/VR environment based on the user states and the user manual inputs.
US 12253882
1. An apparatus comprising: an input device and a wearable computing device with a bio-signal sensor and a display to provide an augmented reality or virtual reality (“AR/VR”) environment for a user, the AR/VR environment containing virtual elements, the bio-signal sensor receives bio-signal data from the user, the bio-signal sensor comprising a brainwave sensor, the computing device having or in communication with a processor configured to: as part of the AR/VR environment, present content on the display where the content includes the virtual elements and has an AR/VR event occurring within the AR/VR environment, the AR/VR event having one or more changes on the virtual elements in the AR/VR environment; continuously receive user manual inputs from the input device for user interaction with the virtual elements in the AR/VR environment including during the AR/VR event;
continuously receive the bio-signal data of the user from the bio-signal sensor, including during the AR/VR event; process the bio-signal data to extract and select features of the bio-signal data; determine user states of the user, including brain states, using a prediction model and based on the features of the bio-signal data; modify the AR/VR environment based on the user states and manual input from a third party.
14. A method implemented using an input device and a wearable computing device having or in communication with a processor, a bio-signal sensor and a display to provide an augmented reality or virtual reality (“AR/VR”) environment for a user, the AR/VR environment containing a plurality of virtual elements, the bio-signal sensor receives bio-signal data from the user, the bio-signal sensor comprising a brainwave sensor; the method comprising: as part of the AR/VR environment, presenting content on the display where the content has an AR/VR event occurring within the AR/VR environment, the AR/VR event having one or more changes on at least a portion of the plurality of virtual elements in the AR/VR environment; continuously receiving user manual inputs from the input device for user interaction with the virtual elements in the AR/VR environment including during the AR/VR event; continuously receiving the bio-signal data of the user from the bio-signal sensor, including during the AR/VR event; processing the bio-signal data to extract and select features of the bio-signal data; determining user states of the user, including brain states, using a prediction model and based on the features of the bio-signal data; modifying the AR/VR environment based on the user states and manual input from a third party.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 6, 12, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Freer (US 20080275358) in view of Teller (US 20040133081).
Regarding claim 1, Freer teaches a system (Fig. 1) comprising:
an input device, a wearable computing device, and a display to provide an augmented reality or virtual reality ("AR/VR") environment for a user (Fig. 9, [0091]), the AR/VR environment containing virtual elements ([0030]);
one or more bio-signal sensors to receive bio-signal data from one or more other users, the bio-signal sensors ([0006] employing brainwave monitors for determining level of attention for each team … a predetermined attention threshold, also see [0065]);
the computing device having or in communication with a processor ([0053] FIG. 3 in part represents software executing in a computer) configured to: as part of the AR/VR environment, present content at least on the display where the content includes the virtual elements occurring within the AR/VR environment;
continuously receive user manual inputs from the input device for user interaction with the virtual elements in the AR/VR environment ([0083] trainee 400 is using an attentional brainwave monitor 404 which is essentially identical to the attentional brainwave monitor 102 described hereinabove with reference to FIGS. 3A, 3B, 3C, 3D and 3E, and accordingly is not described in detail here. Very briefly, the attentional brainwave monitor 304 includes a sensor headband 406 connected to a brainwave monitoring device hardware unit 408 connected to a desktop computer 410 or personal computer (PC) 410, which includes a CPU unit 412 and a display 414. Also connected to the programmed computer 410 is a steering wheel input device 416. The programmed computer 410 implements the driving simulator 402 training environment 402, via the display 414 and the steering wheel input device 416. Implemented within the programmed computer 410 are functions related to the attentional brainwave monitor 404. Also implemented within the programmed computer 410 is an activation device 418 connected to the attentional brainwave monitor 404 and to the training environment 402, and operable to activate the training environment 402 when the determined level of attention of the trainee 400 is at or above a predetermined attention threshold for the trainee 400);
continuously receive the bio-signal data of the one or more other users from the one or more bio-signal sensors, including during the VR event (Fig. 3 describes two different loops in the flowchart that disclose continuously receiving the bio-signal data of the user from the bio-signal sensor; [0091] FIG. 9 illustrates a law enforcement officer trainee 700 receiving job … sensor headband 706 integrated with the virtual reality goggles 702);
process the bio-signal data to extract and select features of the bio-signal data ([0086] trainees 502 and 504 are using respective attentional brainwave monitors 508 … when the determined level of attention of all team member trainees 502 and 504 is at or above a predetermined attention threshold for the particular team member trainee);
determine user states of the one or more other users ([0083] Also connected to the programmed computer 410 is a steering wheel input device 416. The programmed computer 410 implements the driving simulator 402 training environment 402, via … training environment 402, and operable to activate the training environment 402 when the determined level of attention of the trainee 400 is at or above a predetermined attention threshold, i.e., user state score, for the trainee 400), including brain states ([0051] The term electroencephalography (EEG) is generally employed to refer to the measurement of electrical activity produced by the brain as measured or recorded from electrodes placed on the scalp of a person. Such activity is commonly termed "brain wave" activity); and modify the AR/VR environment based on the user states and the user manual inputs ([0084] As shown in FIG. 6B, if the trainee 400 loses her focused attention state, the training environment 402 is inactivated. The trainee 400 is no longer able to interact with the driving simulator 402, and the driving simulator 402 pauses. When the trainee 400 regains her focused attention state, the training environment 402 is again "activated." The driving simulator 404 is restarted, and the trainee 400 is again able to interact with the driving simulator 402).
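For illustration only (this sketch forms no part of the rejection and does not purport to be Freer's actual implementation), the activation behavior Freer describes at [0084] amounts to a threshold gate on the determined attention level. A minimal Python sketch, in which the Simulator interface and the threshold value are hypothetical:

```python
# Illustrative sketch only; the Simulator interface and threshold are
# hypothetical stand-ins, not Freer's disclosed implementation.
ATTENTION_THRESHOLD = 0.6  # assumed normalized attention threshold


class Simulator:
    """Hypothetical stand-in for a training environment such as Freer's driving simulator."""
    def __init__(self):
        self.active = True

    def pause(self):
        self.active = False

    def resume(self):
        self.active = True


def update_environment(simulator, attention_level):
    """Activate or inactivate the environment based on the determined attention level."""
    if attention_level >= ATTENTION_THRESHOLD:
        simulator.resume()  # focused attention regained: environment is activated
    else:
        simulator.pause()   # focused attention lost: environment is inactivated
```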
Freer is silent on determining user states using a prediction model and based on the features of the bio-signal data.
However, Teller teaches determining user states using a prediction model and based on the features of the bio-signal data ([0006] a method of measuring a state parameter of an individual, including collecting a plurality of sensor signals from at least two sensors in electronic communication with a sensor device worn on a body of the individual, at least one of the sensors being a physiological sensor, and utilizing a first set of signals based on one or more of the plurality of sensor signals in a first function, the first function determining how a second set of signals based on one or more of the plurality of sensor signals is utilized in one or more second functions, each of the one or more second functions having an output, wherein one or more of the outputs are used to predict the state parameter of the individual; see also [0011] for the bio-sensor).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Freer in view of Teller to determine user states using a prediction model and based on the features of the bio-signal data.
The motivation is to provide apparatuses for measuring a state parameter of an individual using signals from one or more sensors.
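For illustration only (this sketch forms no part of the rejection and is not the actual method of Freer, Teller, or the instant application), the combined teaching, extracting features from bio-signal data and determining user states with a prediction model, could be sketched as follows in Python; the sampling rate, frequency bands, placeholder training data, and choice of classifier are all assumptions:

```python
# Illustrative sketch only; not the method of Freer, Teller, or the
# application. Sampling rate, bands, data, and model are assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256  # assumed EEG sampling rate (Hz)


def extract_features(eeg_window):
    """Extract alpha/beta band-power features from one EEG window."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    alpha = psd[(freqs >= 8) & (freqs < 13)].mean()
    beta = psd[(freqs >= 13) & (freqs < 30)].mean()
    return np.array([alpha, beta])


# Placeholder training data: 100 one-second windows with binary state
# labels (e.g., attentive vs. inattentive).
rng = np.random.default_rng(0)
windows = rng.standard_normal((100, FS))
labels = rng.integers(0, 2, size=100)

# The "prediction model": any supervised classifier over the features.
model = LogisticRegression().fit(
    np.vstack([extract_features(w) for w in windows]), labels)


def determine_user_state(eeg_window):
    """Predict a brain-state label from the extracted features."""
    return int(model.predict(extract_features(eeg_window).reshape(1, -1))[0])
```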
Regarding claim 2, Freer teaches wherein the wearable computing device ([0065] a sensor headband 104 as shown in Fig. 3A; computer 108 includes a CPU) further comprises a user bio-signal sensor to receive bio-signal data from the user ([0051] the term electroencephalography (EEG) is generally employed to refer to the measurement of electrical activity produced by the brain as measured or recorded from electrodes placed on the scalp of a person … commonly termed "brain wave" activity), and wherein the processor receives the bio-signal data from the user as part of the bio-signal data (Fig. 3A, element 104; also Fig. 9, [0091]).
Regarding claim 3, Freer teaches wherein the processor is further configured to: analyze an emotional connection between the user and the one or more other users; and present the emotional connection in the AR/VR environment ([0086] FIG. 7 illustrates a team 500 of two trainees 502 and 504 engaged in an interactive chemistry lesson training or learning environment 506. The trainees 502 and 504 are using respective attentional brainwave monitors 508 and 510, each of which is essentially identical to the attentional brainwave monitor 102 described hereinabove with reference to FIGS. 3A, 3B, 3C, 3D and 3E, and accordingly are not described in detail here. Briefly, the attentional brainwave monitors 508 and 510 include respective sensor headbands 512 and 514 connected to respective brainwave monitoring device hardware units 516 and 518. The hardware units 516 and 518 are connected to a single desktop computer 520 or personal computer (PC) 520, which includes a CPU unit 522 and a touch screen computer display 526. Implemented within the programmed computer 520 are functions related to the attentional brainwave monitors 508 and 510, depending on design considerations for a particular system. Also implemented within the programmed computer 520 is an activation device 528 connected to the attentional brainwave monitors 508 and 510 and to the training environment 506, and operable to activate the training environment 506 when the determined level of attention of all team member trainees 502 and 504 is at or above a predetermined attention threshold for the particular team member trainee 502 or 504).
Regarding claim 4, Freer teaches wherein the emotional connection is analyzed as a cross-state or as neural synchrony ([0086] FIG. 7 illustrates a team 500 of two trainees 502 and 504 engaged in an interactive chemistry lesson training or learning environment 506. …. Also implemented within the programmed computer 520 is an activation device 528 connected to the attentional brainwave monitors 508 and 510 and to the training environment 506, and operable to activate the training environment 506 when the determined level of attention of all team member trainees 502 and 504 is at or above a predetermined attention threshold for the particular team member trainee 502 or 504, i.e., neural synchrony).
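For illustration only (not part of the rejection), "neural synchrony" is conventionally quantified as statistical dependence between two users' concurrently recorded brain signals. A minimal sketch, assuming preprocessed per-user signal traces of equal length:

```python
# Illustrative sketch only: Pearson correlation between two users'
# concurrent brain-signal traces as a simple synchrony measure.
import numpy as np


def neural_synchrony(trace_a, trace_b):
    """Correlation in [-1, 1]; higher values suggest greater synchrony."""
    return float(np.corrcoef(trace_a, trace_b)[0, 1])
```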
Regarding claim 6, Freer teaches one or more additional input devices, one or more additional wearable computing devices, and one or more additional displays to provide the AR/VR environment for the one or more other users ([0091] FIG. 9 illustrates a law enforcement officer trainee 700 receiving job training using virtual reality (VR) equipment represented by virtual reality goggles 702, as well as an attentional brainwave monitor 704 represented by a sensor headband 706 integrated with the virtual reality goggles 702. On a flat panel display 710 is a depiction of the scene the trainee 700 is experiencing with his virtual reality goggles 702, representing a training environment 712).
Regarding claim 12, the limitations are similar to the limitations of claim 1 and are rejected in the same way.
Regarding claim 13, Freer teaches wherein continuously receiving the bio-signal data of the one or more other users further comprises continuously receiving bio-signal data of the user (Fig. 3, Fig. 9, [0091]).
Regarding claim 16, Freer teaches, as part of the AR/VR environment, presenting the content to the one or more other users (Fig. 9).
Claims 5, 14, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Freer (US 20080275358) in view of Teller (US 20040133081) and further in view of Raskin (US 20150187188).
Regarding claim 5, Freer is silent on wherein the emotional connection is presented at least as a visual representation in the AR/VR environment.
However, Raskin teaches wherein the emotional connection is presented at least as a visual representation in the AR/VR environment ([0024] A simulation may be an imitation or a substantial reproduction of the first force. For example, the first force exerted on the first wearable device may be a compression. The second force on the second wearable device may also be a compression, a simulation of the compression exerted on the first wearable device. The compression may represent a "virtual hug" transmitted from the first user to the second user. Communication or transmission of a "virtual hug" may generate an emotional connection between the first user and the second user, also [0028]).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Freer in view of Raskin to present the emotional connection at least as a visual representation in the AR/VR environment.
The motivation is to provide communication using tactile stimuli on wearable devices.
Regarding claim 14, Freer in view of Raskin teaches further comprising: analyzing an emotional connection between the user and the one or more other users (Freer: [0086] FIG. 7 illustrates a team 500 of two trainees 502 and 504 engaged in an interactive chemistry lesson training or learning environment 506. The trainees 502 and 504 are using respective attentional brainwave monitors 508 and 510, each of which is essentially identical to the attentional brainwave monitor 102 described hereinabove with reference to FIGS. 3A, 3B, 3C, 3D and 3E, and accordingly are not described in detail here. Briefly, the attentional brainwave monitors 508 and 510 include respective sensor headbands 512 and 514 connected to respective brainwave monitoring device hardware units 516 and 518. The hardware units 516 and 518 are connected to a single desktop computer 520 or personal computer (PC) 520, which includes a CPU unit 522 and a touch screen computer display 526. Implemented within the programmed computer 520 are functions related to the attentional brainwave monitors 508 and 510, depending on design considerations for a particular system. Also implemented within the programmed computer 520 is an activation device 528 connected to the attentional brainwave monitors 508 and 510 and to the training environment 506, and operable to activate the training environment 506 when the determined level of attention of all team member trainees 502 and 504 is at or above a predetermined attention threshold for the particular team member trainee 502 or 504); and presenting the emotional connection in the AR/VR environment, wherein the emotional connection is presented at least as a visual representation in the AR/VR environment (Raskin: [0024]).
Regarding claim 15, Freer teaches wherein the emotional connection is analyzed as a cross-state or as neural synchrony ([0086] FIG. 7 illustrates a team 500 of two trainees 502 and 504 engaged in an interactive chemistry lesson training or learning environment 506. …. Also implemented within the programmed computer 520 is an activation device 528 connected to the attentional brainwave monitors 508 and 510 and to the training environment 506, and operable to activate the training environment 506 when the determined level of attention of all team member trainees 502 and 504 is at or above a predetermined attention threshold for the particular team member trainee 502 or 504, i.e., neural synchrony).
Claims 7, 8, 10, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Freer (US 20080275358) in view of Teller (US 20040133081) and further in view of Leahy (US 20090177980).
Regarding claim 7, Freer teaches wherein each of the user and the one or more other users has a respective field of view of the AR/VR environment (Fig. 9, [0091]).
Freer is silent on wherein each of the user and the one or more other users have a respective avatar in the AR/VR environment corresponding to a respective user.
However, Leahy teaches wherein each of the user and the one or more other users has a respective avatar in the AR/VR environment corresponding to a respective user ([0017] FIG. 1 is an illustration of a client screen display 10 seen by one user in the chat system. Screen display 10 is shown with several stationary objects (wall, floor, ceiling and clickable object 13) and two "avatars" 18. Each avatar 18 is a three dimensional figure chosen by a user to represent the user in the virtual world. Each avatar 18 optionally includes a label chosen by the user. In this example, two users are shown: "Paula" and "Ken", who have chosen the "robot" avatar and the penguin avatar, respectively. Each user interacts with a client machine (not shown) which produces a display similar to screen display 10, but from the perspective of the avatar for that client/user. Screen display 10 is the view from the perspective of a third user, D, whose avatar is not shown since D's avatar is not within D's own view. Typically, a user cannot see his or her own avatar unless the chat system allows "out of body" viewing or the avatar's image is reflected in a mirrored object in the virtual world).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Freer in view of Leahy so that each of the user and the one or more other users has a respective avatar in the AR/VR environment corresponding to the respective user.
The motivation is to provide a highly scalable architecture for a three-dimensional graphical, multi-user, interactive virtual world system.
Regarding claim 8, Freer in view of Leahy teaches wherein at least one of the respective avatars (Leahy: Fig. 1, Paula, Ken) updates ([0008]) based on the bio-signal data of at least one of the respective users (Freer: [0091], Fig. 9, an attentional brainwave monitor 704 represented by a sensor headband 706 integrated with the virtual reality goggles 702).
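For illustration only (not part of the rejection), an avatar that updates based on a user's bio-signal-derived state could be sketched as follows; the Avatar class and the state labels are hypothetical, not Leahy's or Freer's disclosed implementation:

```python
# Illustrative sketch only; the Avatar class and state labels are
# hypothetical stand-ins for the claimed avatar-update behavior.
class Avatar:
    def __init__(self, label):
        self.label = label           # e.g., "Paula" or "Ken" (Leahy, Fig. 1)
        self.expression = "neutral"

    def update_from_state(self, brain_state):
        """Change the avatar's appearance based on a determined brain state."""
        if brain_state == "attentive":
            self.expression = "alert"
        else:
            self.expression = "distracted"
```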
Regarding claim 10, Freer in view of Leahy teaches wherein the AR/VR environment further includes a virtual pet (Leahy: Fig. 1, penguin) that responds to the bio-signal data of the user (Freer: [0091]).
Regarding claim 17, the limitations are similar to those of claim 7 and are rejected in the same way.
Regarding claim 18, the limitations are similar to those of claim 8 and are rejected in the same way.
Regarding claim 20, the limitations are similar to those of claim 10 and are rejected in the same way.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Freer (US 20080275358) in view of Teller (US 20040133081) and further in view of Leahy (US 20090177980) and Dempski (US 20050010637).
Regarding claim 9, Freer is silent on further comprising at least one facial expression sensor for at least one of the respective users; and wherein at least one of the respective avatars updates based on an output from the at least one facial expression sensor.
However, Dempski teaches at least one facial expression sensor for at least one of the respective users, wherein at least one of the respective avatars updates based on an output from the at least one facial expression sensor ([0052]).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Freer in view of Dempski to include at least one facial expression sensor for at least one of the respective users, wherein at least one of the respective avatars updates based on an output from the at least one facial expression sensor.
The motivation is to provide intelligent collaborative media to enhance the social experience.
Regarding claim 19, the limitations are similar to those of claim 9 and are rejected in the same way.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Freer (US 20080275358) in view of Teller (US 20040133081) and further in view of Park (KR 20130007767).
Regarding claim 11, Freer is silent on wherein the wearable computing device further comprises an output device to further present the content to the user comprising one or more of a vibrational and auditory output device.
However, Park teaches wherein the wearable computing device (Fig. 1a, 200) further comprises an output device to further present the content to the user comprising one or more of a vibrational and auditory output device (the feedback unit 120 serves to control output through vibration or sound; it may comprise a vibration unit for generating a vibration output and a speaker unit 122 for generating a sound output; when a specific gesture is processed at the wearable display device 200, feedback control based on information received from the external device causes the vibration unit to generate a vibration output or the speaker unit 122 to generate a sound output).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Freer in view of Park so that the wearable computing device further comprises an output device to further present the content to the user comprising one or more of a vibrational and auditory output device.
The motivation is to provide a wearable display device and a content display method that enhance visualization.
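For illustration only (not part of the rejection), the gesture-triggered vibrational or auditory feedback described by Park could be sketched as follows; the Vibrator and Speaker classes, the gesture name, and the sound file are hypothetical placeholders:

```python
# Illustrative sketch only; device classes, gesture name, and sound file
# are hypothetical placeholders, not Park's disclosed implementation.
class Vibrator:
    def pulse(self, duration_ms):
        print(f"vibrating for {duration_ms} ms")   # vibration output


class Speaker:
    def play(self, sound):
        print(f"playing {sound}")                  # sound output


class FeedbackUnit:
    """Presents content feedback to the user as vibration and/or sound."""
    def __init__(self):
        self.vibrator = Vibrator()
        self.speaker = Speaker()

    def on_gesture(self, gesture):
        if gesture == "tap":
            self.vibrator.pulse(duration_ms=100)
        else:
            self.speaker.play("notify.wav")
```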
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
- Nodelman (US 7689521)
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOWFIQ ELAHI whose telephone number is (571)270-1687. The examiner can normally be reached M-F: 10AM-3PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Boddie can be reached at (571)272-0666. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TOWFIQ ELAHI/Primary Examiner, Art Unit 2625