DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The following prior art references are considered pertinent to applicant's disclosure:
US 20160147308 A1 (Gelman)
Wang et al., “Real time eye gaze tracking with Kinect,” 2016 23rd International Conference on Pattern Recognition (ICPR), Cancún Center, Cancún, México, December 4-8, 2016
US 20240005825 A1 (Takahashi)
US 20210373671 A1 (Yang)
US 20240412730 A1 (Chae)
Allowable Subject Matter
Claims 4-5, 8, 26-27 are allowed.
Response to Remarks/Arguments
The rejections made under 35 U.S.C. 112 have been withdrawn in view of the amendments to the claims.
The rejections of claims 4-5, 8, and 26-27 have been withdrawn.
Applicant’s arguments with respect to the remaining claims are moot in view of the new grounds of rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 11, 16, 19-21, and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Gelman in view of Wang.
Regarding Claim 1: Gelman teaches An image display system comprising:
a floating image display that displays a floating image as a real image in a predetermined displayable region in a three-dimensional space; [(para 118 & Fig. 1B; floating display {para 114}; this is a real image, as it is produced in front of the eye without glasses, etc.)]
a sensor that detects a user in a vicinity of the displayable region and a subject and outputs detection information and image data; [(para 167, 132, 174)] and
control circuitry that:
identifies a position of eyes or a face of the user based on the detection information; [(para 167)]
identifies a gesture of the user; [(para 212)]
and, in response to the gesture of the user, controls the floating display to display an image of the subject as the floating image, based on the image data, and displays the floating image at a display position that is based on the three-dimensional position of the eyes or the face of the user. [(Fig. 4H; subject 454, para 194)]
Gelman does not explicitly show that the position of the eyes is three-dimensional.
However, in the same/related field of endeavor, Wang teaches that the position of the eyes is three-dimensional in a Kinect system. [(Wang, see Introduction)] A Kinect system is used in Gelman for the eye tracking. [(Gelman para 174)]
Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art references because such a combination would provide predictable results with no change to their respective functionalities.
Gelman teaches, w.r.t. Claim 11: The image display system according to claim 1, wherein the sensor includes a camera that photographs the subject and outputs the detection information and image data. [(Gelman Fig. 4H, para 468-469, 477)]
Gelman teaches, w.r.t. Claim 16: The image display system according to claim 1, wherein the subject designation gesture is an action in which the user points at the subject with a finger. [(Gelman para 131)]
Gelman teaches, w.r.t. Claim 19 (Currently Amended): The image display system according to claim 1, wherein when the gesture is a predetermined size designation gesture for enlarging/reducing the size of the floating image in the displayable region, the control [[unit]] circuitry controls the floating image display [[unit]] so that the floating image is displayed in the size designated by the size designation gesture. [(para 268, zooming)]
Gelman teaches, w.r.t. Claim 20 (Original): The image display system according to claim 19, wherein the size designation gesture is an action in which the user widens or narrows a distance between two fingers. [(para 268)]
Gelman teaches, w.r.t. Claim 21 (Original): The image display system according to claim 19, wherein the size designation gesture is an action in which the user moves a finger closer to or farther from the displayable region. [(para 268)]
W.r.t. Claim 24: see the analysis of claim 1 above.
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Gelman in view of Wang, and further in view of Takahashi.
Regarding Claim 18: Gelman does not explicitly show wherein the floating image display [[unit]] includes a first display [[unit]] that displays a first image, a second display [[unit]] that displays a second image, and a real image optical system that displays the first image or the second image as the floating image as the real image, a distance from the first display [[unit]] to the real image optical system is longer than a distance from the second display [[unit]] to the real image optical system, and the control [[unit]] circuitry identifies a gesture, including at least one of the body part movement and the posture of the user, based on the detection information, and when the gesture is a predetermined depth designation gesture designating a depth direction position of the displayable region, controls the floating image display [[unit]] so that the floating images displayed in the displayable region whose depth direction position is designated by the depth designation gesture.
However, in the same/related field of endeavor, Takahashi teaches wherein the floating image display [[unit]] includes a first display [[unit]] that displays a first image, a second display [[unit]] that displays a second image, and a real image optical system that displays the first image or the second image as the floating image as the real image, a distance from the first display [[unit]] to the real image optical system is longer than a distance from the second display [[unit]] to the real image optical system, and the control [[unit]] circuitry identifies a gesture, including at least one of the body part movement and the posture of the user, based on the detection information, and when the gesture is a predetermined depth designation gesture designating a depth direction position of the displayable region, controls the floating image display [[unit]] so that the floating images displayed in the displayable region whose depth direction position is designated by the depth designation gesture. [(Takahashi Fig. 3B; unit 1 and unit 2 are the two displays; para 78-79)]
Therefore, in light of the above discussion, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of the prior art references because such a combination would provide predictable results with no change to their respective functionalities.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Shahan Rahaman, whose telephone number is (571) 270-1438. The examiner can normally be reached from 7am to 3:30pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nasser Goodarzi can be reached at telephone number (571) 272-4195. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from Patent Center. Status information for published applications may be obtained from Patent Center. Status information for unpublished applications is available through Patent Center for authorized users only. Should you have questions about access to Patent Center, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
/SHAHAN UR RAHAMAN/Primary Examiner, Art Unit 2426