DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 9 is objected to because of the following informalities: claim 9 recites “the threshold guide.” There is insufficient antecedent basis for this limitation in the claim. Appropriate correction is required.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-15, 17, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Neuman et al. (U.S. Patent Application Publication No. 2014/0049536, hereinafter “Neuman”).
As per claim 1, as shown in Figs. 1 and 3, Neuman teaches a system for visualization comprising:
a camera rig comprising a plurality of cameras configured to capture images from one or more perspectives (¶ [32] and also ¶ [5]); and
at least two visual guides (further addressed below. See Fig. 3, ¶ [39-40], boundary surface lines 311 and 321 defining regions 1 and 2) defining a volumetric space within an environment (¶ [16]) wherein the volumetric space is updated based on one or more parameters associated with the camera rig (see ¶ [49], “The content or animated/modeled scene 550 is filmed or rendered based upon position and other settings or parameters (such as lens setting, axis, toe in, and the like) of at least two pairs 560, 564 of virtual cameras 561 (first left camera), 562 (first right camera), 565 (second left camera), and 566 (second right camera)”).
Neuman does not explicitly teach that the boundary surfaces 311 and 321 are visual. However, Neuman does teach at ¶ [39] that “the system 300 has boundary surface 311 that is used to define a first region 313 in the shot or scene in which the first camera 310 will be used to capture 3D data,” and similarly for boundary surface 321. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to configure these boundary surfaces as visual guides for the captured regions in which the objects are located, since Neuman intrinsically teaches defining (or pre-defining) these regions to avoid wasted space (see ¶ [41]).
As per claim 2, Neuman also teaches wherein the parameters of the camera rig may be adjusted manually to define an optimal volumetric space (¶ [52], i.e., by using a user interface “… to assign a particular camera rig or pair of cameras to that object or objects and also to define camera parameters or settings for each rig 560, 564” in order to avoid wasted space addressed in claim 1).
As per claim 3, Neuman implicitly teaches wherein the parameters of the camera rig may be adjusted automatically to define an optimal volumetric space (see Fig. 6, ¶ [55], method 600 can be implemented using software).
As per claim 4, as shown in Fig. 5, Neuman also teaches a physical display associated with the camera rig such that the volumetric space is defined by properties of the physical display (¶ [54], 3D-capable display devices 514).
As per claim 5, Neuman further teaches wherein the camera rig is physical, virtual, or a combination of the two (Fig. 3).
As per claim 6, Neuman also teaches wherein the one or more parameters associated with the camera rig include at least one of a focal length of one or more of the plurality of cameras (¶ [7]), an interocular distance between one or more of the plurality of cameras, a convergence point of camera views (¶ [57]), a scale of the camera rig, a position of one or more of the plurality of cameras, and an orientation of one or more of the plurality of cameras.
As per claim 7, Neuman further teaches wherein the camera rig is configured to capture one or more perspectives to generate a three-dimensional (3D) visualization (¶ [16]).
As per claim 8, Neuman implicitly teaches wherein the at least two visual guides include:
a visual pop guide defining an optimal position for forward projection of an object in a scene (as addressed in claim 1, boundary surfaces 311 and 321); and
a visual depth guide defining an optimal position for depth positioning of the object in the scene (¶ [40], referring to Fig. 3);
wherein an area between the visual pop guide and the visual depth guide defines the volumetric space (Fig. 4, ¶ [46], “Further, a blending or transition region 484 is shown between the images 482, 489 as defined by transitions 486, 488 associated with boundary surfaces defining first and second regions for which camera pairs/rigs collected 3D data (e.g., background and foreground regions of a scene or shot)”).
As per claim 9, Neuman teaches wherein the threshold guide (see claim objection above) defines a theoretical distance or upper limit for forward projection of certain objects in a scene (distance d2 shown in Fig. 3, ¶ [39]) where captured content appears focused on the physical display while existing beyond recommended limits of the visual pop guide (such as object 303, Fig. 3, ¶ [42]); and further comprising:
a two-dimensional (2D) guide defining both the front surface of a physical display device (¶ [58], boundary surface) and the convergence point of the captured views (¶ [57], by defining interaxial distances and convergence angles using operational parameters of the camera rig) wherein part of an object intersecting the visual guide will have neither negative nor positive parallax, and subject to the type of object, may appear two-dimensional (2D) (¶ [6-8], i.e., by eliminating these problems recited in ¶ [14]).
As per claim 10, as addressed in claim 2 with reference to Fig. 5, ¶ [52], Neuman teaches a graphical user interface and production tool configured to enable a user to visualize, modify, and automate the one or more parameters associated with the camera rig.
Claim 11, which is similar in scope to claim 1 as addressed above, is thus rejected under the same rationale.
Claim 12, which is similar in scope to claim 6 as addressed above, is thus rejected under the same rationale.
Claim 13, which is similar in scope to claim 7 as addressed above, is thus rejected under the same rationale.
Claim 14, which is similar in scope to claim 8 as addressed above, is thus rejected under the same rationale.
Claim 15, which is similar in scope to claim 9 as addressed above, is thus rejected under the same rationale.
As per claim 17, Neuman further teaches adjusting a scale of the camera rig, based on the determined position of the forward or rear-most geometry of the object or scene, to align the two-dimensional (2D) guide with a center point of the object (¶ [66] with reference to Fig. 7, “Since the region bounded by near1 and near2 is controlled solely by the first deep image, li,j can be represented as a straight line segment aligned with the viewing direction of deep pixel (i, j) of this deep image”).
Claim 20, which is similar in scope to claim 1 as addressed above, is thus rejected under the same rationale.
Allowable Subject Matter
Claims 16, 18, and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: the prior art, taken singly or in combination, does not teach or suggest a method comprising, among other things:
…casting an invisible plane forward from the camera rig;
reverting a cast distance when the invisible plane hits an object in the virtual environment; and
continuously reducing the cast distance to determine a position of a forward or rear-most geometry of the object or scene (claim 16); or
adjusting an interocular distance between one or more of the plurality of cameras until the pop guide aligns to the determined position of the forward or rear-most geometry of the object (claim 18).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hau H. Nguyen, whose telephone number is 571-272-7787. The examiner can normally be reached Monday-Friday from 8:30 AM to 5:30 PM.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached on (571) 272-7773.
The fax number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/HAU H NGUYEN/Primary Examiner, Art Unit 2611