DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to the abstract idea of “controlling a display device to display a first image to a user; detecting a gaze location of the user corresponding to a portion of the first image observed by the user; acquiring one or more content properties of the first image displayed to the user based on an analysis of the first image; acquiring user response data relating to the user's autonomic response to the display of the first image; and determining a state of the user's mental condition based on the user response data, the one or more content properties of the first image and the gaze location of the user, wherein the mental condition includes at least one of anxiety and/or depression” without significantly more.
Step 1:
Claims 1-21 recite a device, which is a product, and thus fall into a statutory category.
Claims 22-25 recite a method and thus fall into a statutory category.
Claims 26-29 recite a computer program product, which is a product, and thus fall into a statutory category.
Step 2A, Prong 1
Claim 1 recites an information processing apparatus comprising circuitry configured to:
display a first image to a user;
detect a gaze location of the user corresponding to a portion of the first image observed by the user;
acquire one or more content properties of the first image displayed to the user based on an analysis of the first image;
acquire user response data relating to the user's autonomic response to the display of the first image; and
determine a state of the user's mental condition based on the user response data, the one or more content properties of the first image and the gaze location of the user, wherein the mental condition includes at least one of anxiety and/or depression.
All of these limitations, under the broadest reasonable interpretation, cover a method of organizing human activity and a mental process except for the recitation of “information processing apparatus comprising circuitry” and “display.” See MPEP 2106.04(a)(2)(II) and (III). All of these steps, other than reciting that circuitry and a display are performing these tasks, can be performed by a human, for example, by showing an image to a person, determining whether the person is looking at the image, determining a color of the image, observing the person's facial expression (e.g., whether they are smiling or frowning), and then concluding whether the person is happy or sad.
As such, the claims are directed towards an abstract idea.
Step 2A, Prong 2
The claims' additional elements include “information processing apparatus comprising circuitry” and “display.” The “circuitry” and “display” are recited at a high level of generality, i.e., as generic circuitry performing the generic computer functions of processing and displaying data. These generic limitations are no more than mere instructions to apply the exception using generic computer components. Accordingly, these additional limitations do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.
Step 2B
As discussed with respect to Step 2A, Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here in Step 2B; i.e., mere instructions to apply an exception on a generic computer cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.
Under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be re-evaluated in Step 2B to determine whether it is more than well-understood, routine, conventional activity in the field. The specification does not provide any indication that the computer processor is anything other than a generic, off-the-shelf computer component. Court decisions cited in MPEP 2106.05(d)(II) indicate that computer-implemented processes are not significantly more than an abstract idea (and thus are ineligible) where the claim as a whole amounts to nothing more than generic computer functions merely used to implement an abstract idea, such as an idea that could be done by a human analog (i.e., by hand or by merely thinking). Accordingly, the use of generic computer functions merely to implement an abstract idea is well-understood, routine, conventional activity.
For these reasons, there is no inventive concept in the claim and thus it is ineligible.
Dependent claims 2, 3, and 7-18 further limit the abstract idea already identified in independent claim 1, and they are ineligible for the same reasons provided for claim 1 above.
Dependent claim 4 further recites a display device. This generic display device limitation is no more than mere instructions to apply the exception using a generic display component, which is nothing more than a part of a generic, off-the-shelf computer. Accordingly, this additional limitation does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Dependent claims 5-6 further recite a generic eye tracking device, which is nothing more than a part of a generic, off-the-shelf computer (see [0211] of the specification). This generic eye tracking device limitation is no more than mere extra-solution activity of data gathering and a generic computer function merely used to implement an abstract idea. Accordingly, this additional limitation does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.
Independent claims 19-21 contain all of the limitations included in claim 1 above and are ineligible for the same reasons.
Independent claims 22-25 are method claims but contain all of the limitations included in claim 1 above and are ineligible for the same reasons.
Independent claims 26-29 are computer program product claims but contain all of the limitations included in claim 1 above and are ineligible for the same reasons.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-9 and 12-29 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Samec (US 2017/0365101).
Regarding claim 1, Samec discloses an information processing apparatus (fig. 90, refs. 140, 150), the apparatus comprising circuitry configured to:
control a display device to display a first image to a user (e.g. para. 546; presenting a stimulus to the user, such as a nearby car on the road, at block 1710);
detect a gaze location of the user corresponding to a portion of the first image observed by the user (e.g. para. 547; block 1720 wherein the user's reaction, including eye gaze, is determined);
acquire one or more content properties of the first image displayed to the user based on an analysis of the first image (e.g. para. 546; determination that, for example, a nearby car was on the road at 1710);
acquire user response data relating to the user's autonomic response to the display of the first image (e.g. para. 547; determining alertness qualities based on the detected reaction at block 1720); and
determine a state of the user's mental condition based on the user response data, the one or more content properties of the first image and the gaze location of the user, wherein the mental condition includes at least one of anxiety and/or depression (e.g. para. 536; determining user’s mental condition as it relates to anxiety at block 1730).
Regarding claim 2, Samec additionally discloses wherein the circuitry is further configured to generate a second image for display to the user, the second image being generated in accordance with the determined state of the user's mental condition (e.g. para. 532; use of “masked images” to calm the user down).
Regarding claim 3, Samec additionally discloses wherein the circuitry is further configured to acquire the first image from a content server (e.g. para. 474; remote data repository 160 that includes data related to the virtual content as shown in Figure 9D).
Regarding claim 4, Samec additionally discloses a display device and wherein the display device is one of at least a virtual reality display, an augmented reality display or a widescreen display screen (e.g. Abstract; augmented reality display).
Regarding claim 5, Samec additionally discloses wherein the circuitry is further configured to detect a gaze location of the user using an eye tracking device (e.g. para. 545; eye tracking cameras 24 as shown in Figure 10).
Regarding claim 6, Samec additionally discloses wherein the eye-tracking device comprises eye-facing cameras (e.g. para. 545; eye tracking cameras 24 as shown in Figure 10).
Regarding claim 7, Samec additionally discloses wherein the circuitry is further configured to analyze the first image in order to acquire the one or more content properties of the first image (e.g. as disclosed in para. 532).
Regarding claim 8, Samec additionally discloses wherein the circuitry is further configured to analyze the one or more content properties of the first image based on at least one of pixel colour, pixel brightness, object recognition, properties of the image scene and/or image metadata (e.g. para. 532; determination that the user sees the object).
Regarding claim 9, Samec additionally discloses wherein the one or more content properties of the first image are the location of emotional stimuli within the first image (e.g. para. 546; registering and identifying the stimulus).
Regarding claim 12, Samec additionally discloses wherein the user response data comprises at least one of information relating to pupillary diameter, pupillary saccadic motion, facial expression, skin conductance, facial blood flow, heart rate and/or skin temperature (e.g. para. 561; detection of user's heart rate, sweating, or other physiological signs).
Regarding claim 13, Samec additionally discloses one or more detecting devices configured to detect the user response data (e.g. para. 476; user sensors 24,28,30,32).
Regarding claim 14, Samec additionally discloses wherein the circuitry is further configured to determine the state of the user's mental condition using a trained model (e.g. para. 865; use of computer vision algorithms and machine learning models).
Regarding claim 15, Samec additionally discloses wherein the circuitry is further configured to adapt the first image by adapting the one or more content properties of the first image (e.g. para. 532; use of masked images).
Regarding claim 16, Samec additionally discloses wherein the circuitry is further configured to generate the second image for display to the user by adapting the first image in accordance with the determined state of the user's mental condition (e.g. para. 532; modulation of masked images).
Regarding claim 17, Samec additionally discloses wherein the circuitry is further configured to generate the second image data for display to the user by acquiring the second image data from a content server in accordance with the determined state of the user's mental condition (e.g. para. 533; use of the database of masked images).
Regarding claim 18, Samec additionally discloses wherein the circuitry is further configured to generate feedback in accordance with the determined state of the user's mental condition (e.g. para. 410; generating feedback to the user).
Regarding claim 19, Samec discloses an information processing apparatus (fig. 90, refs. 140, 150), the apparatus comprising circuitry configured to:
detect a gaze location of a user corresponding to a portion of an image observed by the user on a display device (e.g. para. 547; block 1720 wherein the user's reaction, including eye gaze, is determined);
acquire one or more content properties of the image displayed to the user based on an analysis of the image (e.g. para. 546; determination that, for example, a nearby car was on the road at 1710);
acquire user response data relating to the user's autonomic response to the display of the image (e.g. para. 547; determining alertness qualities based on the detected reaction at block 1720); and
determine a state of the user's mental condition based on the user response data, the one or more content properties of the image and the gaze location of the user, wherein the mental condition includes at least one of anxiety and/or depression (e.g. para. 536; determining user's mental condition as it relates to anxiety at block 1730).
Regarding claim 20, Samec discloses an information processing apparatus, the apparatus comprising circuitry configured to:
determine a state of a user's mental condition (e.g. para. 536; determining user’s mental condition as it relates to anxiety at block 1730) based on user response data relating to:
a user's autonomic response to display of an image (e.g. para. 547; determining alertness qualities based on the detected reaction at block 1720);
one or more content properties of the image (e.g. para. 546; determination that, for example, a nearby car was on the road at 1710); and
a gaze location of the user corresponding to a portion of the image observed by the user (e.g. para. 547; block 1720 wherein the user's reaction, including eye gaze, is determined); and
generate a second image for display to the user, the second image being generated in accordance with the determined state of the user's mental condition (e.g. para. 532; use of “masked images” to calm the user down).
Regarding claim 21, Samec discloses an information processing apparatus (fig. 90, refs. 140, 150), the apparatus comprising circuitry configured to:
detect a gaze location of a user corresponding to a portion of an image observed by the user on a display device (e.g. para. 547; block 1720 wherein the user's reaction, including eye gaze, is determined);
acquire one or more content properties of the image displayed to the user based on an analysis of the image (e.g. para. 546; determination that, for example, a nearby car was on the road at 1710); and
acquire user response data relating to the user's autonomic response to the display of the image (e.g. para. 547; determining alertness qualities based on the detected reaction at block 1720).
Regarding claim 22, Samec discloses an information processing method (fig. 90, refs. 140, 150) comprising:
controlling a display device to display a first image to a user (e.g. para. 546; presenting a stimulus to the user, such as a nearby car on the road, at block 1710);
detecting a gaze location of the user corresponding to a portion of the first image observed by the user (e.g. para. 547; block 1720 wherein the user's reaction, including eye gaze, is determined);
acquiring one or more content properties of the first image displayed to the user based on an analysis of the first image (e.g. para. 546; determination that, for example, a nearby car was on the road at 1710);
acquiring user response data relating to the user's autonomic response to the display of the first image (e.g. para. 547; determining alertness qualities based on the detected reaction at block 1720); and
determining a state of the user's mental condition based on the user response data, the one or more content properties of the first image and the gaze location of the user, wherein the mental condition includes at least one of anxiety and/or depression (e.g. para. 536; determining user’s mental condition as it relates to anxiety at block 1730).
Regarding claim 23, Samec discloses an information processing method (fig. 90, refs. 140, 150) comprising:
detecting a gaze location of the user corresponding to a portion of the first image observed by the user on a display device (e.g. para. 547; block 1720 wherein the user's reaction, including eye gaze, is determined);
acquiring one or more content properties of the first image displayed to the user based on an analysis of the first image (e.g. para. 546; determination that, for example, a nearby car was on the road at 1710);
acquiring user response data relating to the user's autonomic response to the display of the first image (e.g. para. 547; determining alertness qualities based on the detected reaction at block 1720); and
determining a state of the user's mental condition based on the user response data, the one or more content properties of the first image and the gaze location of the user, wherein the mental condition includes at least one of anxiety and/or depression (e.g. para. 536; determining user’s mental condition as it relates to anxiety at block 1730).
Regarding claim 24, Samec discloses an information processing method comprising:
determining a state of a user's mental condition (e.g. para. 536; determining user’s mental condition as it relates to anxiety at block 1730) based on user response data relating to:
a user's autonomic response to display of an image (e.g. para. 547; determining alertness qualities based on the detected reaction at block 1720);
one or more content properties of the image (e.g. para. 546; determination that, for example, a nearby car was on the road at 1710); and
a gaze location of the user corresponding to a portion of the image observed by the user (e.g. para. 547; block 1720 wherein the user's reaction, including eye gaze, is determined); and
generating a second image for display to the user, the second image being generated in accordance with the determined state of the user's mental condition (e.g. para. 532; use of “masked images” to calm the user down).
Regarding claim 25, Samec discloses an information processing method (fig. 90, refs. 140, 150) comprising:
detecting a gaze location of a user corresponding to a portion of an image observed by the user on a display device (e.g. para. 547; block 1720 wherein the user's reaction, including eye gaze, is determined);
acquiring one or more content properties of the image displayed to the user based on an analysis of the image (e.g. para. 546; determination that, for example, a nearby car was on the road at 1710); and
acquiring user response data relating to the user's autonomic response to the display of the image (e.g. para. 547; determining alertness qualities based on the detected reaction at block 1720).
Regarding claims 26-29, Samec additionally discloses computer program products capable of performing the methods of claims 22-25.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Amanda K Hulbert whose telephone number is (571)270-1912. The examiner can normally be reached Monday - Friday 9:00-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Unsu Jung can be reached at 571-272-8506. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Amanda K Hulbert/Primary Examiner, Art Unit 3792