DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-12 are rejected under 35 U.S.C. 103 as being unpatentable over Bradski (U.S. Patent App. Pub. No. 2019/0094981) in view of Lindeman (U.S. Patent App. Pub. No. 2016/0196694).
Regarding claim 1:
Bradski teaches: an immersive display (e.g. Fig. 3: 30, a head mounted display, which, in combination with para. 1457, can be “immersive,” as in the example of “an immersive virtual bookstore”) comprising: a body (Fig. 3: 30, the HMD has a body); at least one display inside of the body for displaying images to a user of the immersive display (Fig. 3: 33, display component); a processor coupled to the at least one display (para. 206, the processor can be externally coupled, internal, integrated, or any combination) and operable to:
obtain first images for display on the at least one display, where the first images are generated at least in part by software (para. 13, the first images can be virtual content. Bradski provides numerous examples of generating virtual content/virtual worlds via programs/software (paras. 175-79));
obtain second images for display on the at least one display from one or more cameras (Fig. 3: 32, sensors including outward-facing cameras; these second images can be used for an augmented reality mode, per para. 214. See also the background at para. 3: second camera images (i.e. real-world images) combined with virtual content is AR, which Bradski teaches extensively);
display the first images (paras. 175-77, displaying first images as either virtual content or virtual world);
measure user eye movement (see “Gaze Tracking” beginning at para. 1005, which is measurement of user eye movement; this is alternatively taught by Fig. 118: 11808, eye tracking cameras);
detect whether the user repeatedly looked at a set area of the display (para. 1003, “information regarding what virtual and/or real objects produced the most number/time/frequency of eye gazes or stares. This may further allow the system to understand a user's interest in a particular virtual or real object”. The claimed “set area” is the area of the display at which the user is repeatedly looking).
Regarding: in response to the detecting, overlay the second images over a portion of the display proximate to where the user repeatedly looked, consider the following. In analogous art, Lindeman teaches “a computerized method for including at least a portion of a real world view of an environment in which a virtual reality headset is located into a visual virtual world displayed by the virtual reality headset” (para. 17). The method includes, “receive image data from a camera mounted on the virtual reality headset, the image data including real-world images (corresponding to Applicant’s claimed “second images”)…, identify at least one physical tool from the image data…, segment image data for the at least one physical tool from other image data, and display the segmented image data of the at least one physical tool in the virtual world displayed by the virtual-reality headset (this corresponds to the overlay of second images over a portion of the display)” (para. 17).
Modifying the applied references, such that the real world or second images, per Bradski or Lindeman, are overlaid in response to detecting user eye motion, per Bradski, as an indication of intent or desire to interact, per Bradski, is all taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
The prior art included each element recited in claim 1, although not necessarily in a single embodiment, with the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 2:
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: the immersive display of claim 1, where after overlaying the second images over the portion of the display proximate to where the user repeatedly looked, detecting whether the user looked away from the set area of the display, and in response to the detecting, remove the overlay of the second images, and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
Bradski teaches that where the user is looking can support rendering (e.g. para. 900; para. 1061: “The AR system may be responsive to various user interactions or gestures, including looking at some item of virtual content”; and para. 1003: “information regarding what virtual and/or real objects produced the most number/time/frequency of eye gazes or stares. This may further allow the system to understand a user's interest in a particular virtual or real object.”). This also includes a user not looking, from which it can be ascertained that interest is no longer present. Modifying the applied references so as to include the above and remove the overlay (i.e. when the user no longer has interest), as a way to control rendering, with the additional motivation of alleviating processing load for items the user does not care about, is all taught and suggested by the prior art, and would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 3:
Bradski teaches: the immersive display of claim 1, where the set area of the display is a top portion of the display (see the above mapping to claim 1. A top area is one embodiment and use case of a user looking at a display).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to be responsive to user interest.
Regarding claim 4:
Bradski teaches: the immersive display of claim 1, where the second images are images from behind the user (e.g. paras. 587, 689, outward cameras can be 360 degrees, which includes images from behind a user).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to take advantage of views from all around a user in terms of delivering content.
Regarding claim 5:
Bradski teaches: the immersive display of claim 1, where the second images are images from between 180 degrees and 360 degrees around the user (e.g. paras. 587, 689, outward cameras can be 360 degrees, which encompasses images from between 180 and 360 degrees around the user).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to take advantage of views from all around a user in terms of delivering content.
Regarding claim 6:
Bradski teaches: the immersive display of claim 1, where the set area of the display is a right portion of the display (see the above mapping to claim 1. A right area is one embodiment and use case of a user looking at a display).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to be responsive to user interest.
Regarding claim 7:
Bradski teaches: the immersive display of claim 1, where the set area of the display is a left portion of the display (see the above mapping to claim 1. A left area is one embodiment and use case of a user looking at a display).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to be responsive to user interest.
Regarding claim 8:
Bradski teaches: the immersive display of claim 1, where the set area of the display is a bottom portion of the display (see the above mapping to claim 1. A bottom area is one embodiment and use case of a user looking at a display).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to be responsive to user interest.
Regarding claim 9: see also claim 1.
It would have been obvious for one of ordinary skill in the art to have further modified the applied reference(-s), in view of same, to have obtained: an immersive display, comprising: a body; at least one display inside of the body for displaying images to a user of the immersive display; a processor coupled to the at least one display (see mapping to claim 1) and operable to:
obtain first images for display on the at least one display, where the first images are generated by software (see mapping to claim 1);
obtain second images (see the mapping to claim 1) from a forward facing camera and third images from a rear-facing camera for display on the at least one display (Bradski, e.g. paras. 587, 689, outward cameras can be 360 degrees, which includes images from behind a user);
render the first images with an opacity of less than 100% (Lindeman, para. 67, 69, opacity of images can be modified via panel modification, or by software rendering means);
display the first images on the at least one display (mapping to claim 1, or Lindeman, para. 67, 69);
display the second images and third images on the at least one display (the concept of overlaying real images is known, as mapped in claim 1 via Lindeman; see also Lindeman, paras. 45-47, 53, “introducing real-world objects into a virtual world” per para. 45 – this teaches/suggests more than one real-world object (i.e. second and third images) to be displayed within a virtual world), where the second and third images are separated by a visible line to delineate the images (example: Lindeman, Fig. 2B; displaying images separated by lines is known, and an obvious design choice to one of ordinary skill), and the results of the modification would have been obvious and predictable to one of ordinary skill in the art as of the effective filing date of the claimed invention. See MPEP §2143(A).
The prior art included each element recited in claim 9, although not necessarily in a single embodiment, with the only difference between the claimed invention and the prior art being the lack of actual combination of certain elements in a single prior art embodiment, as described above.
One of ordinary skill in the art could have combined the elements as claimed by known methods, and in that combination, each element merely performs the same function as it does separately. One of ordinary skill in the art would have also recognized that the results of the combination were predictable as of the effective filing date of the claimed invention.
Regarding claim 10:
Lindeman teaches: the immersive display of claim 9, where the opacity is approximately 50% (paras. 67, 69; and also, alternatively, an obvious design choice).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to have flexibility in design rendering.
Regarding claim 11:
Lindeman and/or Bradski teach: the immersive display of claim 9, where the second image is displayed vertically above the third image (neither reference is limiting about where to place images, and/or this is an obvious design choice, and/or obvious embodiment over the prior art, as mapped above in claims 1 and 9).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to have flexibility in design rendering.
Regarding claim 12:
Lindeman and/or Bradski teach: the immersive display of claim 9, where the second image is displayed horizontally separate from the third image (neither reference is limiting about where to place images, and/or this is an obvious design choice, and/or obvious embodiment over the prior art, as mapped above in claims 1 and 9).
It would have been obvious for one of ordinary skill in the art, as of the effective filing date of Applicant’s claims, to have further modified the applied reference(-s) in view of same to have obtained the above, motivated to have flexibility in design rendering.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sarah Lhymn whose telephone number is (571)270-0632. The examiner can normally be reached M-F, 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Sarah Lhymn
Primary Examiner
Art Unit 2613
/Sarah Lhymn/Primary Examiner, Art Unit 2613