DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-2, 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by KEISER et al. (US 20120188452 A1; hereafter KEISER).
As of Claim 1: KEISER teaches a method, comprising: obtaining photographing parameters of a camera (¶0131 and note that one camera model is provided for each physical camera, the camera model comprising parameters that describe the position and orientation of the camera with respect to the scene, and optical parameters of the camera); determining, based on the photographing parameters of the camera and a reference dataset, target virtual scene parameters corresponding to virtual content to be photographed by the camera (¶¶0119-0123 and note that automatically computing 73 one or more sets of virtual camera parameters; branching execution, if 74 more than one set of virtual camera parameters has been computed; selecting 75 one of the sets; generating 76 a virtual camera flight path and a virtual video stream; and storing or transmitting 78 the virtual video stream), the reference dataset comprising a correspondence between reference photographing parameters and virtual scene parameters; and displaying the virtual content according to the target virtual scene parameters (¶¶0027-0028, 0075, 0105).
As of Claim 2: KEISER further teaches the photographing parameters of the camera are set in real time on the camera and are configured for photographing a fused picture of a real scene and virtual content displayed on a display device; and the method further comprises causing the camera to photograph the fused picture of the real scene and the virtual content displayed on the display device (¶¶0075-0076, 0080, 0085 and note that, having computed the virtual video stream, transmitting the virtual video stream to the video server and controlling the video server to concatenate the first video sequence, the virtual video stream, and the second video sequence).
As of Claim 15: KEISER further teaches causing the camera to photograph the displayed virtual content (¶0105 and note that the virtual replay unit 13 is usually equipped to generate its own video stream output 24; these video streams are displayed on video display devices 18 and/or transmitted via a transmitter 17).
As of Claim 16: KEISER further teaches the camera is a video camera (¶0107).
As of Claim 17: KEISER teaches a computing device (¶0105), comprising: one or more processors; and memory storing instructions (¶¶0133-0137) that, when executed by the one or more processors, cause the computing device to: obtain photographing parameters of a camera; determine, based on the photographing parameters of the camera and a reference dataset, target virtual scene parameters corresponding to virtual content to be photographed by the camera (¶¶0133-0137 and note that a computer-readable camera database is created, comprising the camera model and in particular the fixed camera parameters for later retrieval during processing of a scene. A plurality of images, representing a plurality of views, is analysed, giving for each image the location and identity of the features such as field markings in the image ("what does the camera see in this image"), and associated camera parameters ("where is the camera looking in this image"). In other words, the calibration database represents the characteristics of known features, in particular their location within an image, in association with camera parameters, in a densely sampled potential camera configuration space), the reference dataset comprising a correspondence between reference photographing parameters and virtual scene parameters; and display the virtual content according to the target virtual scene parameters (¶¶0108-0111, 0140 and note that a new video sequence shows the scene before the keyframe during a predefined time (pre-sequence length), stopping and displaying a flight of a virtual camera from one of the given views, i.e., from the given angle into a completely virtual perspective (different from the real camera(s)), preferably a time-freeze sequence, and then a flight of the virtual camera back to the initial given view, or to the view of another camera, and then continuing playback in the corresponding real camera).
As of Claim 18: KEISER further teaches the photographing parameters of the camera are set in real time on the camera and are configured for photographing a fused picture of a real scene and virtual content displayed on a display device (¶¶0108-0111); and the instructions, when executed by the one or more processors, cause the computing device to: cause the camera to photograph the fused picture of the real scene and the virtual content displayed on the display device (¶¶0108-0111,0140 and note that FIG. 2 schematically shows the concatenation of video sequences from different video streams: Given the stored source image sequences 10, 10', individually labelled as a, b, c, d, an operator selects a frame from one of the source image sequences 10, 10', thereby also selecting a corresponding point in time or keyframe point tk. The selected frame is called reference keyframe. Frames from the other source image sequences 10, 10' taken at the same time shall be called further keyframes. Video streams are denoted by a, b, c, d. Individual frames are denoted by a_t, b_t, etc., where t is the time at which the frame was recorded. For the keyframes, the time (or keyframe point in time) is denoted by tk. Video sequences; that is, short (several seconds or minutes) continuous subsections of a video stream; are denoted by aS1, cS2, V (left half of FIG. 2). The virtual replay unit 13 generates a virtual video stream V as seen from the virtual camera 11 and preferably combines this with an introductory video sequence aS1 leading up to the keyframe point, and a subsequent video sequence cS2 continuing for a short time after the keyframe point (right half of FIG. 2). Preferably, the virtual video stream V corresponds to a movement of the virtual camera 11 from the pose of one source camera 9 to another source camera 9', along a virtual camera flight path 20).
As of Claim 19: KEISER further teaches the reference photographing parameters are configured for describing photographing parameters of a reference camera (¶0108 and note that the process of determining the camera parameters of physical cameras, as they change over time, is called camera calibration. In principle, this can be done by measuring the parameters by dedicated hardware. In practice, calibration is preferably done based on the camera's video stream alone, by using, for example, a combination of: a priori position information generated in a pre-processing stage, based on the detection of characteristic scene features, such as playing field markings (line positions, corners, circles, etc.), as disclosed in ¶0109; and online orientation information based on characteristic scene features and/or on the differences between the frames of a video stream, as disclosed in ¶0110), and the virtual scene parameters are configured to describe photographing parameters of a virtual camera in a virtual scene (¶0107 and note that the virtual views are described by virtual camera parameters; a virtual camera flight path 20 is a sequence of virtual views and can be described by a change (or course or trajectory) of virtual camera parameters over (simulated) time and defines a movement of the virtual camera 11).
As of Claim 20: All the limitations are addressed in Claim 17. Moreover, KEISER teaches a non-transitory computer-readable storage medium storing instructions (¶0092).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over KEISER et al. (US 20120188452 A1; hereafter KEISER) in view of Tytgat (US 20120007943 A1).
As of Claim 3: Tytgat is a similar or analogous system to the claimed invention, as evidenced by Tytgat's teaching that, when capturing a scene with two or multiple cameras, the characteristics of these cameras are often needed in order to efficiently use captured data or information, for instance for deriving 3-D information of objects in the scene; these characteristics typically include the location of the cameras, the orientation of the cameras, and the internal camera parameters (also called intrinsic parameters), such as resolution, field of view, skew, etc. (¶0002). This would have prompted a predictable variation of KEISER by applying Tytgat's known principle of the reference photographing parameters being configured for describing photographing parameters of a reference camera (¶¶001-0015), the virtual scene parameters being configured to describe photographing parameters of a virtual camera in a virtual scene (¶0018 and note the "real" reference points and "virtual" reference points; the first being the actual points in the real world, and the second being the reference points that are back-projected from the imaging device into the virtual 3D space (real and virtual should be equal in a "perfect" system, i.e., a system without capturing errors). The at least four real reference points can be seen as real points, being points of the real world, that are visible by both the first and the second imaging device. The at least four virtual reference points can be seen as the points that are derived from the projected reference points from an imaging device.), and the method further comprising: configuring the reference dataset according to the photographing parameters of the reference camera and the photographing parameters of the virtual camera (¶¶0062-0065).
In view of motivations such as optimizing the alignment between the image of the reference points on the second image and the virtual reference points, as disclosed in ¶0016 of Tytgat, thereby further improving image quality, one of ordinary skill in the art would have implemented the claimed variation of the prior art system of KEISER.
Therefore, the claimed invention would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
Claims 4 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over KEISER et al. (US 20120188452 A1; hereafter KEISER) in view of Tytgat (US 20120007943 A1), and further in view of Kotake (US 20170249752 A1).
As of Claim 4: KEISER further teaches the reference camera comprises a plurality of groups of photographing parameters (¶0007), and a configuration process of the reference dataset comprises: obtaining a first group of photographing parameters of the reference camera, and photographing a real image of a reference object based on the first group of photographing parameters using the reference camera, wherein the first group of photographing parameters is one of the plurality of groups of photographing parameters (¶0007 and note that one source camera provides a source image sequence of a scene; the pose and optical settings of the camera, defining a view of the camera, are described by a set of camera parameters); adjusting a virtual object corresponding to the reference object by adjusting the virtual scene parameters; photographing the virtual object based on the first group of photographing parameters using the reference camera, to obtain a virtual image (¶0008 and note that an input unit accepts a user input which defines a reference keyframe from a reference view from the source image sequence, the reference keyframe being a video image from the point in time at which the user wishes the subsequently generated virtual replay to take place; also, ¶¶0023-0026 and note that automatically computing one or more sets of virtual camera parameters comprises the steps of: automatically computing the position of objects in the scene; determining a classification of the situation observed in the scene, in particular by retrieving a user input that specifies this classification; and automatically computing at least one set of virtual camera parameters based on the position of objects in the scene and the classification of the situation); and adding the correspondence to the reference dataset.
Kotake is a similar or analogous system to the claimed invention, as evidenced by Kotake's teaching of a generating unit which, if the addition determination unit determines that a new piece of reference data is to be added, generates the new piece of reference data, and an updating unit which adds the generated piece of reference data and updates the pieces of reference data. This would have prompted a predictable variation of KEISER by applying Kotake's known principle of comparing the virtual image with the real image, and recording target virtual scene parameters corresponding to a case that the virtual image matches the real image; and establishing a correspondence between the first group of photographing parameters and the target virtual scene parameters (¶0080 and note that initialization is not limited to image matching between a captured image and a keyframe but may be any method if the position and orientation of an imaging apparatus can be measured without requiring advance information regarding the position and the orientation; for example, every feature point detected on a captured image may be matched with a feature point detected on a keyframe, and a feature point correspondence obtained thereby may be used for initialization; where feature points and regions detected on a captured image are classified into a plurality of classes, a similar keyframe may be selected for initialization on the basis of the frequencies of the classes of feature points and regions).
In view of motivations such as holding pieces of reference data including a captured image and a position and an orientation of an imaging apparatus when the image is captured, as disclosed in ¶0007 of Kotake, thereby further improving image quality, one of ordinary skill in the art would have implemented the claimed variation of the prior art system of KEISER.
Therefore, the claimed invention would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
As of Claim 7: KEISER in view of Tytgat and further in view of Kotake further teaches the determining the target virtual scene parameters (KEISER ¶¶0105-0107) comprises: determining, based on a relationship between the photographing parameters of the camera and the photographing parameters of the reference camera and a correspondence between the photographing parameters of the reference camera (KEISER ¶¶0055-0057) and the photographing parameters of the virtual camera, the target virtual scene parameters corresponding to the virtual content to be photographed by the camera (KEISER ¶¶0112-0114).
Allowable Subject Matter
Claims 5-6, 8-14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
As of Claim 5: the prior art of record fails to teach or fairly suggest the limitations of claim 5, in combination with claims 1, 3, and 4, that include: “wherein: the reference dataset comprises M * N * P groups of photographing parameters, each group of photographing parameters comprises a focal length, an aperture value, and a focal point value, M is a quantity of candidate focal lengths, N is a quantity of candidate aperture values, P is a quantity of candidate focal point values, and M, N, and P are all positive integers; and the first group of photographing parameters comprises an ith candidate focal length, a jth candidate aperture value, and a kth candidate focal point value, wherein i is a positive integer less than M, j is a positive integer less than N, and k is a positive integer less than P; and the adjusting the virtual object comprises: determining the ith candidate focal length as a virtual focal length of the virtual camera, and configuring a virtual aperture value and a virtual focal point value of the virtual camera.”
As of Claim 8: the prior art of record fails to teach or fairly suggest the limitations of claim 8, in combination with claims 1, 3, and 7, that include: “wherein the reference dataset comprises M * N * P groups of photographing parameters, each group of photographing parameters comprises a focal length, an aperture value, and a focal point value, M is a quantity of candidate focal lengths, N is a quantity of candidate aperture values, P is a quantity of candidate focal point values, and M, N, and P are positive integers; and the photographing parameters of the camera comprise an actual focal length, an actual aperture value, and an actual focal point value; and the determining the target virtual scene parameters comprises: determining the actual focal length as a scene focal length of a target virtual scene corresponding to the virtual content to be photographed by the camera; and determining, based on that the actual focal length is consistent with an ith candidate focal length of the reference camera, a scene aperture value and a scene focal point value of the target virtual scene according to a relationship between the actual aperture value and the actual focal point value and a candidate aperture value and a candidate focal point value that are associated with the ith candidate focal length; or determining, based on that the actual focal length is between an ith candidate focal length and an (i+1)th candidate focal length of the reference camera, the scene aperture value and the scene focal point value of the target virtual scene according to a relationship between the actual aperture value and the actual focal point value and a candidate aperture value and a candidate focal point value that are associated with the ith candidate focal length and a relationship between the actual aperture value and the actual focal point value and a candidate aperture value and a candidate focal point value that are associated with the (i+1)th candidate focal length, wherein i is a positive integer less than M.”
As of Claims 6 and 9-14: these claims depend from claims identified above as containing allowable subject matter and are objected to as allowable as well.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MEKONNEN D DAGNEW, whose telephone number is (571) 270-5092. The examiner can normally be reached 8:00 AM-5:00 PM, Monday-Thursday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Lin Ye, can be reached at 571-272-7372. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MEKONNEN D DAGNEW/Primary Examiner, Art Unit 2638