DETAILED ACTION
Allowable Subject Matter
Claims 10-17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Interpretation Under § 112(f) Despite the Absence of “Means”
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “a scaling transformation unit,” “an image generation unit,” and “a display control unit” in claims 1-19; “an angle detection unit” in claims 7-14; “an angle setting unit” and “an angle control unit” in claims 10-14; “a clipping unit” in claims 15-17; and “an eyepoint position detection unit” in claim 18.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 7-9, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Vesely (US 9354718 B2) in view of Arakita (US 20130009957 A1).
As per claim 1, Vesely teaches the claimed:
1. An image processing device comprising:
(Vesely teaches the use of perspective rendering (element 424 in FIG. 4). In order to produce a correct perspective rendering, scaling would have to be used as part of the process.)
according to an angle of a display surface with respect to a horizontal plane of a real space;
(Vesely (18): “The rendering may be intended for a display which may be positioned horizontally (e.g., parallel to a table surface or floor) in reference to a standing viewpoint.”
Vesely (46): “Stereo display angle determiner 406 may be configured to determine a display angle of display 404. For example, in the embodiment shown in FIG. 3, a tilting mechanism may enable display 406 to be tilted. Stereo display angle determiner 406 may be configured to determine the angle of the display 406 tilt and provide the information regarding the display angle for further processing to enable modification of the perspective of the 3D rendered images to adjust to the display angle.”)
an image generation unit that generates a stereoscopic image to be displayed on the display surface, by using the stereoscopic object having been subjected
(Col. 12, lines 43-52 and FIG. 4, where the perspective rendering element 424 corresponds to an image generation unit. Col. 12, lines 43-52 states: “… The rendered left and right eye images may also be based on the determined display plane position and/or orientation value and/or the display size/resolution.”)
a display control unit that displays the stereoscopic image on the display surface.
(FIG. 1 of Vesely shows display control units 150A and 150B.)
Vesely alone does not explicitly teach the remaining claim limitation. It is noted that Vesely does not explicitly mention a “scaling transformation unit” per se. Arakita teaches that it was known in the art to use a “scaling transformation unit” in [0063]: “Further, the scaling processing unit 1362e is a processing unit that determines an enlargement ratio or a reduction ratio of volume data when it is requested to enlarge or reduce a parallax image group.”
The references are combined by incorporating the scaling processing unit of Arakita to perform the scaling processing in Vesely.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the scaling transformation unit as taught by Arakita with the system of Vesely in order to perform the mathematical calculations required to produce the correct perspective rendering of the image data as seen from a given viewpoint.
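For context on the reasoning above that a perspective rendering inherently involves a scaling step, the following is a minimal illustrative sketch of a perspective projection in which each point is scaled by a factor of focal length over depth. All names and values are hypothetical and are not taken from Vesely or Arakita.

```python
# Illustrative only: a simple perspective projection in which each point of a
# 3D object is scaled by (focal_length / depth) before being drawn, showing
# why producing a perspective rendering involves a scaling step.
# Hypothetical sketch; not the implementation of Vesely or Arakita.

def project_point(x, y, z, focal_length=1.0):
    """Project a 3D point onto a 2D image plane; the 1/z factor is the scaling."""
    scale = focal_length / z          # points farther away are scaled down
    return (x * scale, y * scale)

# Example: the same object edge appears smaller when it is farther from the eye.
print(project_point(1.0, 1.0, 2.0))   # (0.5, 0.5)
print(project_point(1.0, 1.0, 4.0))   # (0.25, 0.25)
```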
As per claim 20, this claim is similar in scope to limitations recited in claim 1, and thus is rejected under the same rationale.
As per claim 2, Vesely teaches the claimed:
2. The image processing device according to claim 1, wherein the scaling transformation unit scales down the stereoscopic object as the angle decreases, whereas the scaling transformation unit scales up the stereoscopic object as the angle increases.
(Vesely (82): “For example, if the user moves the display to have a new angle with respect to the user's viewpoint (e.g., for each of viewing), then the method may update the projection of the stereoscopic images for that display to insure the rendered projective images reflect the perspective based on the new relationship between the user's viewpoint and the position of the display. Thus, the method may continually update the perspective driven projection of stereoscopic images of a 3D image based on changes in the 3D image, e.g., due to stylus interaction, underlying changes in the graphics scene, etc., changes in the user's viewpoint, and/or changes in the position or orientation of the display(s).”
Vesely teaches that the rendered images reflect the perspective based on the user’s viewpoint and the display angle, and that the method continually updates the perspective (scaling) based on changes such as the position or orientation of the display (the display angle). Therefore, it would have been obvious that, depending on the change in the display angle, the method scales the stereoscopic images accordingly: scaling down as the angle decreases and scaling up as the angle increases, so as to maintain the proper perspective for the user.)
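To illustrate the reasoning that a perspective-correct rendering scales the object down as the display angle decreases and up as it increases, a minimal sketch is given below. The sine-based scale factor is an assumption chosen only to show a monotonic relationship between angle and scale; it is not taken from Vesely.

```python
# Illustrative only: a scale factor that grows with the display tilt angle
# (0 degrees = display lying flat, 90 degrees = display upright).
# The use of sin() is an assumption for illustration, not Vesely's method.
import math

def scale_for_display_angle(angle_deg, min_scale=0.2):
    """Return a scale factor that decreases as the tilt angle decreases."""
    angle_rad = math.radians(max(0.0, min(90.0, angle_deg)))
    return min_scale + (1.0 - min_scale) * math.sin(angle_rad)

for angle in (15, 45, 75):
    print(angle, round(scale_for_display_angle(angle), 3))
# The printed scale factors increase with the angle, i.e., the object is
# scaled down for smaller angles and scaled up for larger angles.
```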
As per claim 3, Vesely teaches the claimed:
3. The image processing device according to claim 2, wherein the scaling transformation unit performs the scaling transformation in a predetermined range of the angle.
(Vesely (46): “Stereo display angle determiner 406 may be configured to determine the angle of the display 406 tilt and provide the information regarding the display angle for further processing to enable modification of the perspective of the 3D rendered images to adjust to the display angle. Stereo display angle determiner 406 may also be configured to store the determined angle in memory in display system 402. … Stereo display angle determiner 406 may additionally permit user entry of a display angle.”
Vesely teaches angles stored in memory and angles entered by the user; these can be considered a predetermined range of angles, as they are given before any determination of the display angle. These angles also provide information for further processing to enable modification of the perspective (the scaling transformation).)
As per claim 4, Vesely teaches the claimed:
4. The image processing device according to claim 1, wherein the scaling transformation unit performs the scaling transformation such that the stereoscopic object is placed in the display surface when viewed from an eyepoint position of an observer.
(Vesely (78): “In one embodiment, in order to provide the stereoscopic image with the proper perspective, the user's viewpoint position and angle relative to the display may be determined. For example, the method may determine the angle formed between the plane of the display surface and the line of sight of the user. … Thus, the determination of the viewpoint not only determines what angle of the scene the user sees in a 3D image (which is the typical purpose of head tracking), but also how the user sees it (i.e., the perspective of the 3D image corresponds to the user's changing viewpoint). … Additionally, while the above descriptions refer to angles in only two dimensions, it should be understood that the viewpoint location angle may also be determined and adjusted for in the third dimension (e.g., not just z and y, but also z and x, as well as pitch, yaw and roll among other possibilities).”
Vesely teaches adjusting (scaling) the 3D image (the stereoscopic object) based on the viewpoint location, which corresponds to viewing from an eyepoint position of an observer. Vesely further states that the viewpoint location angle may be adjusted for in the third dimension, which encompasses placing the stereoscopic object in the display surface as viewed by the user in order to provide the proper perspective.)
As per claim 7, Vesely teaches the claimed:
7. The image processing device according to claim 1, further comprising
a display unit that has the display surface and a structure forming the angle; and
(FIGS. 3, 5, and 6 of Vesely show a display surface with a structure forming an angle.)
an angle detection unit that detects the angle formed by the structure, wherein the scaling transformation unit performs the scaling transformation according to the angle detected by the angle detection unit.
(Vesely (46): “Stereo display angle determiner 406 may be configured to determine a display angle of display 404. For example, in the embodiment shown in FIG. 3, a tilting mechanism may enable display 406 to be tilted. Stereo display angle determiner 406 may be configured to determine the angle of the display 406 tilt and provide the information regarding the display angle for further processing to enable modification of the perspective of the 3D rendered images to adjust to the display angle.”
Vesely teaches the stereo display angle determiner, which corresponds to the angle detection unit, and which provides information for further processing to enable modification of the perspective of the 3D rendered images (i.e., a scaling transformation according to the detected display angle).)
As per claim 8, Vesely teaches the claimed:
8. The image processing device according to claim 7, wherein the structure allows the display surface to pivot about one end of the display surface (Vesely (41): “FIG. 2 illustrates another embodiment of the system 100, shown as 200A and 200B. In this embodiment, the system may be a foldable and/or portable system (e.g., similar to a laptop or tablet device) where the user may have the system 200 open (as shown in 200A) or closed (as shown in 200B).”).
As per claim 9, Vesely teaches the claimed:
9. The image processing device according to claim 7, wherein the structure allows the display surface to pivot about a position other than one end of the display surface.
(FIG. 3 of Vesely shows the display pivoting about a position other than one end of the display surface.)
As per claim 18, Vesely teaches the claimed:
18. The image processing device according to claim 1, further comprising
an eyepoint position detection unit that detects an eyepoint position of an observer of the stereoscopic image, wherein the image generation unit generates the stereoscopic image on a basis of the eyepoint position.
(Vesely (54): “Tracking system 412 may be coupled to display system 402 and content processing system 420. In various embodiments, tracking system may be configured to determine a user view position and orientation value and a user control position and orientation value based on at least a portion of the user view position and orientation information and user control position and orientation information, respectively, received from the one or more tracking sensors 406.”
Vesely (78): “Thus, the determination of the viewpoint not only determines what angle of the scene the user sees in a 3D image (which is the typical purpose of head tracking), but also how the user sees it (i.e., the perspective of the 3D image corresponds to the user's changing viewpoint). The particular methods by which the 3D image is rendered in order to achieve this effect are varied.”
Vesely teaches the tracking system, which determines a user view position corresponding to the claimed eyepoint position. At (78), Vesely explains that the determination of the viewpoint affects what angle of the scene the user sees in a 3D image, and the 3D image is rendered accordingly; the image is therefore generated on the basis of the eyepoint position.)
As per claim 19, Vesely teaches the claimed:
19. The image processing device according to claim 1, wherein the image generation unit
renders the stereoscopic object,
(Vesely (31): “Accordingly, the 3D scene may be rendered from the perspective of the user such that user can view the 3D scene with minimal distortions (e.g., since it is based on the eyepoint of the user). Thus, the 3D scene may be particularly rendered for the eyepoint of the user, using the position input device. In some embodiments, each eyepoint may be determined separately, or a single eyepoint may be determined and an offset may be used to determine the other eyepoint.”)
generates virtual eyepoint images for both eyes when viewed from an eyepoint position of an observer of the stereoscopic image, and
(Vesely (78-79): “In one embodiment, in order to provide the stereoscopic image with the proper perspective, the user's viewpoint position and angle relative to the display may be determined. … As illustrated at 704, rendered left and right eye images may be generated based on the determined user view position and orientation values and/or user control position and orientation values.”)
transforms the virtual eyepoint images to the stereoscopic image to be displayed on the display surface.
(Vesely (80): “At 706, the rendered left and right eye images may be provided to the display. The rendered images may be accurate to within 1 millimeter in terms of position resulting in a fluid, realistic display environment.”)
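As an illustration of generating virtual eyepoint images for both eyes from a tracked observer position, a minimal sketch follows. The interocular offset, function names, and projection are hypothetical assumptions for illustration only and are not the rendering method of Vesely.

```python
# Illustrative only: generating left- and right-eye viewpoints from a single
# tracked eyepoint using a fixed interocular offset, then projecting a point
# for each eye. Hypothetical sketch; not taken from Vesely.

def eye_positions(eyepoint, interocular=0.064):
    """Return hypothetical left/right eye positions offset along x."""
    x, y, z = eyepoint
    half = interocular / 2.0
    return (x - half, y, z), (x + half, y, z)

def project_from_eye(eye, point, focal_length=1.0):
    """Simple perspective projection of a point as seen from one eye."""
    ex, ey, ez = eye
    px, py, pz = point
    depth = pz - ez
    scale = focal_length / depth
    return ((px - ex) * scale, (py - ey) * scale)

left, right = eye_positions((0.0, 1.6, 0.0))
print(project_from_eye(left, (0.0, 1.5, 0.5)))
print(project_from_eye(right, (0.0, 1.5, 0.5)))
# The two slightly different projections form the left/right image pair.
```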
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Vesely in view of Arakita and in further view of Yang (CN 102340636 A).
As per claim 5, Arakita and Vesely alone do not explicitly teach the claimed limitations.
However, Arakita and Vesely in combination with Yang teaches the claimed:
5. The image processing device according to claim 1, wherein the scaling transformation unit performs the scaling transformation to keep a ratio of a height and a width of the stereoscopic object.
(Yang [0039]: “In order to achieve three-dimensional image under the condition that keep the width and height ratio of, on the display screen at the maximum area of whole display, stereoscopic picture scaling needed to reach this purpose, needs to meet the ratio of scaling must be solid image filled with display screen along the horizontal direction or the vertical direction”. That is, Yang teaches scaling a stereoscopic image while keeping its width-to-height ratio so that the image fills the display screen in either the horizontal or the vertical direction.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to maintain the width and height ratio as taught by Yang with the system of Vesely as modified by Arakita in order to provide accurate depth perception and a view of the stereoscopic object without significant geometrical distortion in the image.
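As an illustration of aspect-ratio-preserving scaling as discussed above, the following sketch applies a single uniform scale factor to both dimensions so that the width-to-height ratio of the object is unchanged. It is a hypothetical example, not the method of Yang.

```python
# Illustrative only: uniform scaling that keeps the width/height ratio of an
# object while fitting it to a display. Hypothetical example, not Yang's
# implementation.

def fit_keep_aspect(obj_w, obj_h, disp_w, disp_h):
    """Scale (obj_w, obj_h) to fill the display in one direction without
    changing the width-to-height ratio."""
    scale = min(disp_w / obj_w, disp_h / obj_h)   # one factor for both axes
    return obj_w * scale, obj_h * scale

w, h = fit_keep_aspect(4.0, 3.0, 1920.0, 1080.0)
print(w, h, w / h)   # the ratio w/h stays 4:3
```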
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Vesely in view of Arakita and in further view of Nishikawa (US 20160037149 A1).
As per claim 6, Arakita and Vesely alone do not explicitly teach the claimed limitations.
However, Arakita and Vesely in combination with Nishikawa teaches the claimed:
6. The image processing device according to claim 1, wherein the scaling transformation unit performs the scaling transformation only in a height direction of the stereoscopic object.
(Nishikawa [0074]: “Further, in a case where the resolution is doubled from 600 ppi to 1200 ppi, for example, by performing the nearest neighbor method or the like, it is also possible to finely adjust a height by using area gradation. In this case, a height achieved by performing superimposing printing one time, for example, can be selected from four levels, and even in a case where the maximum number of times of superimposing printing is 100, it is possible to realize 400-level tone height representation. In this manner, in a case where the resolution is increased by a factor of two or more, the number of representation tone levels can be made larger than the number of times of superimposing printing.”
Nishikawa teaches a resolution conversion, i.e., scaling content to another resolution, and in the passage above teaches that adjusting only in the height direction is possible. The cited paragraph describes extracting the shape data of the stereoscopic object and converting that data to a higher resolution, which incorporates scaling. As shown in the flowchart of FIG. 8, the shape data, after the height is adjusted and scaled, is output to generate stereoscopic object output data, which would include a height-only scaling transformation.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the height-only scaling transformation as taught by Nishikawa with the system of Vesely as modified by Arakita in order to scale content only in the height direction without changing the width, allowing users more freedom to change the aspect ratio of the object or image.
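For contrast with the aspect-ratio-preserving case, a minimal sketch of a scaling transformation applied only in the height direction is shown below. It is a hypothetical illustration of the concept relied on in the rationale, not code from Nishikawa.

```python
# Illustrative only: scaling applied only in the height direction, leaving the
# width (and depth) of the object unchanged. Hypothetical example, not
# Nishikawa's implementation.

def scale_height_only(vertices, height_scale):
    """Multiply only the y (height) coordinate of each (x, y, z) vertex."""
    return [(x, y * height_scale, z) for (x, y, z) in vertices]

box = [(0, 0, 0), (1, 0, 0), (1, 2, 0), (0, 2, 0)]
print(scale_height_only(box, 0.5))   # heights halved, widths unchanged
```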
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA SUO whose telephone number is (571) 272-8387. The examiner can normally be reached Mon-Fri 8am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached on (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA SUO/Examiner, Art Unit 2616
/DANIEL F HAJNIK/Supervisory Patent Examiner, Art Unit 2616