DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement(s) (IDS) submitted on 02/14/2024 is/are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement(s) is/are being considered by the examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically taught as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20190004324) (hereinafter Wang) in view of Kim et al. (US 20170212359) (hereinafter Kim), further in view of Li et al. (US 20170195657) (hereinafter Li).
Regarding claim 1, Wang teaches A multiple-views-one-eye display method with sub-pixels as basic display units, comprising following steps of:
(i) with sub-pixels of a display device as basic display units, arranging a grating device in front of the display device along the light transmission direction, to guide light from a sub-pixel to a corresponding viewing zone; wherein, sub-pixels corresponding to a same viewing zone constitute a sub-pixel group, and a same sub-pixel belongs to only one sub-pixel group at a same time-point (see Wang paragraphs 2, 4, 9, and 52 regarding sub-pixels as display units that correlate one to one with microstructure grating device parts to be grouped into sub-pixel groups that correspond to grating groups and the image from a subpixel group is of a viewpoint of a 3D scene where at least two perspective views are converged on a pupil);
wherein, an image displayed by a sub-pixel group is a perspective view of the displayed 3D scene converging to corresponding viewing zone; wherein, the viewing zones is arranged to guarantee that at least two perspective views, or at least one perspective view and one spliced view, or at least two spliced views will be perceived by a pupil of a viewer, wherein the spliced view refers to the image displayed by a spliced sub-pixel group, and the spliced sub-pixel group is spliced by different complementary parts from different sub-pixel groups (see Wang paragraphs 2, 4, 9, and 52 regarding sub-pixels as display units that correlate one to one with microstructure grating device parts to be grouped into sub-pixel groups that correspond to grating groups and the image from a subpixel group is of a viewpoint of a 3D scene where at least two perspective views are converged on a pupil).
However, Wang does not explicitly teach a control process as needed for the limitations of claim 1.
Kim, in a similar field of endeavor, teaches (ii) loading data to the sub-pixels by a control device (see Kim paragraphs 58 and 82 regarding control process of loading and passing light data into subgroups of grating elements with time series of grating groups being turned on based on color of subpixel grouping- in combination with Wang, the subgroups may be sequentially on one at a time based on time period).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to modify Wang to include the teaching of Kim so that in combination with Wang, the subgroups may be sequentially on one at a time based on time period.
One would be motivated to combine these teachings in order to provide methods for more efficiently transmitting a 3D parallax image (see Kim paragraphs 58 and 82).
However, the combination of Wang and Kim does not explicitly teach a projected 3D scene as needed for the limitations of claim 1.
Li, in a similar field of endeavor, teaches with loaded data of a sub-pixel being the projection information of a displayed 3D scene along the light beam from the sub-pixel and reaching to corresponding viewing zone (see Li paragraphs 7 and 23-24 regarding cylindrical grating and data of sub-pixels for a 3D scene from different viewpoints according to projected mapping- in combination with Wang, the data may be of a 3D scene with subpixels reaching to corresponding viewing zones with a cylindrical grating);
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to modify the combination of Wang and Kim to include the teaching of Li so that in combination with Wang, the data may be of a 3D scene with subpixels reaching to corresponding viewing zones with a cylindrical grating.
One would be motivated to combine these teachings in order to enhance the process of displaying a 3D scene to a viewer (see Li paragraphs 7 and 23-24).
Regarding claim 2, the combination of Wang, Kim, and Li teaches all aforementioned limitations of claim 1, and is analyzed as previously discussed.
Furthermore, the combination of Wang, Kim, and Li teaches wherein a grating unit of the grating device is a cylindrical lens, or a slit (see Li paragraphs 7 and 23-24 regarding cylindrical grating and data of sub-pixels for a 3D scene from different viewpoints according to projected mapping- in combination with Wang, the data may be of a 3D scene with subpixels reaching to corresponding viewing zones with a cylindrical grating).
One would be motivated to combine these teachings in order to enhance the process of displaying a 3D scene to a viewer (see Li paragraphs 7 and 23-24).
Regarding claim 3, the combination of Wang, Kim, and Li teaches all aforementioned limitations of claim 1, and is analyzed as previously discussed.
Furthermore, the combination of Wang, Kim, and Li teaches wherein the grating device is composed of microstructure units, with the microstructure units corresponding to the sub-pixels of the display device in a one-to-one manner (see Wang paragraphs 2, 4, 9, and 52 regarding sub-pixels as display units that correlate one to one with microstructure grating device parts to be grouped into sub-pixel groups that correspond to grating groups, and the image from a subpixel group is of a viewpoint of a 3D scene where at least two perspective views are converged on a pupil).
Regarding claim 4, the combination of Wang, Kim, and Li teaches all aforementioned limitations of claim 2, and is analyzed as previously discussed.
Furthermore, the combination of Wang, Kim, and Li teaches wherein step (i) further comprises dividing the grating units into T grating-unit groups, with adjacent T grating units belonging to different grating-unit groups; and in step (ii) further comprises gating the T grating-unit groups by the control device at T time-points of a time-period sequentially, with only one grating-unit group being turned on at each time-point; wherein T is greater than or equal to 2 (see Kim paragraphs 58 and 82 regarding control process of loading and passing light data into subgroups of grating elements with time series of grating groups being turned on based on color of subpixel grouping such that adjacent pixels are in T different groups greater than 2 of M different colors greater than 2 with grating unit groups corresponding to M colors in a one to one manner and only one color being emitted by the group- in combination with Wang, the subgroups may be sequentially on one at a time based on time period).
One would be motivated to combine these teachings in order to provide methods for more efficiently transmitting a 3D parallax image (see Kim paragraphs 58 and 82).
Regarding claim 5, the combination of Wang, Kim, and Li teaches all aforementioned limitations of claim 2, and is analyzed as previously discussed.
Furthermore, the combination of Wang, Kim, and Li teaches wherein in step (i) further comprises, respectively emitting light of M kinds of colors by sub-pixels of the display device, and dividing the grating units into M grating-unit groups, with adjacent M grating units belonging to different grating-unit groups, wherein M is greater than or equal to 2; wherein, the M grating-unit groups correspond to the M colors in a one-to-one manner, with a grating-unit group allowing light of corresponding color passing through and blocking light of other (M-1) kinds of non-corresponding colors (see Kim paragraphs 58 and 82 regarding control process of loading and passing light data into subgroups of grating elements with time series of grating groups being turned on based on color of subpixel grouping such that adjacent pixels are in T different groups greater than 2 of M different colors greater than 2 with grating unit groups corresponding to M colors in a one to one manner and only one color being emitted by the group- in combination with Wang, the subgroups may be sequentially on one at a time based on time period).
One would be motivated to combine these teachings in order to provide methods for more efficiently transmitting a 3D parallax image (see Kim paragraphs 58 and 82).
Claim(s) 6-11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (US 20190004324) (hereinafter Wang) in view of Kim et al. (US 20170212359) (hereinafter Kim), further in view of Li et al. (US 20170195657) (hereinafter Li), and further in view of Makinen et al. (US 20200371378) (hereinafter Makinen).
Regarding claim 6, the combination of Wang, Kim, and Li teaches all aforementioned limitations of claim 1, and is analyzed as previously discussed.
However, the combination of Wang, Kim, and Li does not explicitly teach a projection device as needed for the limitations of claim 6.
Makinen, in a similar field of endeavor, teaches wherein step (i) further comprises placing a projection device at a position corresponding to the display device, for projecting an enlarged image of the display device (see Makinen paragraphs 16, 180, and 203 regarding projection device, and relay lens in a relay system with micromirrors in a waveguiding arrangement- in combination with Wang, Kim, and Li, the projection and relay device may be incorporated into the display system of Wang, Kim, and Li).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to modify the combination of Wang, Kim, and Li to include the teaching of Makinen so that in combination with Wang, Kim, and Li, the projection and relay device may be incorporated into the display system of Wang, Kim, and Li.
One would be motivated to combine these teachings in order to enhance the physical framework of a 3D display (see Makinen paragraphs 16, 180, and 203).
Regarding claim 7, the combination of Wang, Kim, Li, and Makinen teaches all aforementioned limitations of claim 6, and is analyzed as previously discussed.
Furthermore, the combination of Wang, Kim, Li, and Makinen teaches wherein step (i) further comprises placing a relay device into the light path, for guiding light from the display device to a viewer (see Makinen paragraphs 16, 180, and 203 regarding projection device, and relay lens in a relay system with micromirrors in a waveguiding arrangement- in combination with Wang, Kim, and Li, the projection and relay device may be incorporated into the display system of Wang, Kim, and Li).
One would be motivated to combine these teachings in order to enhance the physical framework of a 3D display (see Makinen paragraphs 16, 180, and 203).
Regarding claim 8, the combination of Wang, Kim, Li, and Makinen teaches all aforementioned limitations of claim 7, and is analyzed as previously discussed.
Furthermore, the combination of Wang, Kim, Li, and Makinen teaches wherein the relay device is a reflective surface, or a semi-transparent and semi-reflective surface, or a combination of free-form surfaces, or an optical waveguide (see Makinen paragraphs 16, 180, and 203 regarding projection device, and relay lens in a relay system with micromirrors in a waveguiding arrangement- in combination with Wang, Kim, and Li, the projection and relay device may be incorporated into the display system of Wang, Kim, and Li).
One would be motivated to combine these teachings in order to enhance the physical framework of a 3D display (see Makinen paragraphs 16, 180, and 203).
Regarding claim 9, the combination of Wang, Kim, and Li teaches all aforementioned limitations of claim 1, and is analyzed as previously discussed.
However, the combination of Wang, Kim, and Li does not explicitly teach eye tracking as needed for the limitations of claim 9.
Makinen, in a similar field of endeavor, teaches wherein step (ii) further comprises tracking a position of a viewer's pupils real-timely by a tracking device (see Makinen paragraphs 133, 164, 199, 223, and 240 regarding real-time eye/pupil tracking and image refresh rates for calculation of image to transmit only to eyes for 3D display of sub-pixel content- in combination with Wang, Kim, and Li, the eye tracking and display refresh may be combined with the display system of Wang, Kim, and Li to track user eye movement and refresh the image based on position).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the application to modify the combination of Wang, Kim, and Li to include the teaching of Makinen so that in combination with Wang, Kim, and Li, the eye tracking and display refresh may be combined with the display system of Wang, Kim, and Li to track user eye movement and refresh the image based on position.
One would be motivated to combine these teachings in order to provide flexibility for a 3D display when a user moves their eyes (see Makinen paragraphs 133, 164, 199, 223, and 240).
Regarding claim 10, the combination of Wang, Kim, Li, and Makinen teaches all aforementioned limitations of claim 9, and is analyzed as previously discussed.
Furthermore, the combination of Wang, Kim, Li, and Makinen teaches wherein step (ii) further comprises refreshing the loaded data of sub-pixels whose emitting light beams reach to a pupil according to real-time position of the pupil; wherein, refreshed data of a sub-pixel whose emitting light beams reach to a pupil is the projection information of a displayed 3D scene along the sub-pixel's emitting light beam which reaches to the pupil (see Makinen paragraphs 133, 164, 199, 223, and 240 regarding real-time eye/pupil tracking and image refresh rates for calculation of image to transmit only to eyes for 3D display of sub-pixel content- in combination with Wang, Kim, and Li, the eye tracking and display refresh may be combined with the display system of Wang, Kim, and Li to track user eye movement and refresh the image based on position).
One would be motivated to combine these teachings in order to provide flexibility for a 3D display when a user moves their eyes (see Makinen paragraphs 133, 164, 199, 223, and 240).
Regarding claim 11, the combination of Wang, Kim, Li, and Makinen teaches all aforementioned limitations of claim 9, and is analyzed as previously discussed.
Furthermore, the combination of Wang, Kim, Li, and Makinen teaches wherein the tracking device is connected to the control device and is controlled by the control device to track a position of a viewer's pupils real-timely (see Makinen paragraphs 133, 164, 199, 223, and 240 regarding real-time eye/pupil tracking and image refresh rates for calculation of image to transmit only to eyes for 3D display of sub-pixel content- in combination with Wang, Kim, and Li, the eye tracking and display refresh may be combined with the display system of Wang, Kim, and Li to track user eye movement and refresh the image based on position).
One would be motivated to combine these teachings in order to provide flexibility for a 3D display when a user moves their eyes (see Makinen paragraphs 133, 164, 199, 223, and 240).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Matthew D Kim whose telephone number is (571)272-3527. The examiner can normally be reached Monday - Friday: 9:30am - 5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joseph Ustaris can be reached at (571) 272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW DAVID KIM/Primary Examiner, Art Unit 2483